[Binary archive content removed: this file is a tar archive of Zuul CI job output, not readable text. The tar headers identify three members — the directories var/home/core/zuul-output/ and var/home/core/zuul-output/logs/, and the file var/home/core/zuul-output/logs/kubelet.log.gz. The remainder of the file is the gzip-compressed kubelet log, whose bytes do not decode as text and are not recoverable here.]
|@ƷU^_S2K0q ֒ an\np"+dFSs',O,q=UkM.T7]a} [G#7|Y½A! jVT\`ƒF&l`u7 iz ) $)AY>Ƞ9"1"$ADgQDY$ڠ8RObR#$EAc x"'DS#JEŘPޜƤ|\j,n.*1U/~r8k4r+*]Nr>|Yɏ_]V[s|ڴGo.^>o_H͚'jT++wpX\]t~}ӄ;sBJ6egyY?}xæ–6 C~ѿTE%X+] + v4MpiY1Ku'VH0?6]ʃn\i5!4qI_(!9J'zUu%CUWm-npQđpS}__"7o@/H>qJNȦD_g߬5#aPN 9ԊsÙ*㻳jF ƶl-79R@x9UhT\%_p"GCJɼf /__.wme6<|jCQV9bU݃ZnWx~ʃI{fњj!hey_6 w1 ^YY[ǜa4.,6{rpS-^M9_ޯ:/.2CjYmyER|Dv 6^XW_<:qUг=d-уG##7ǩfA?t-cQ O,@C~DQ-~@n/-MYV@S[tb|\kg@:%|Д(y#pRCHI'ֳzm=k|yՙ]9b0$U'(e2!Of2c}z|Ķф?L!߿ U& ݳLUW?m G.7/8?Q+<Q-Ex]4&B6R> gQ֫.wߵ xP zp&s$09{lc4Q:<"Z7.CMw\=sE-zfQN~oU&~Cmkn2@n,3~B<+ZOZ%z[|j:Evg[>ԦFz!/W겇0@佝%PJ-LLd\5φ!y8[҃3KOwd}Rvfw3A}PsŁMtBtR#"*(QD@NJx4$h⓳<Nz[L\[Žy q"UFҤ㸌?Zϡ]7ztJYדkdNqb,#Q х@`ۭ:q6=${&cgrGw/bW~o^%e J$Br/.$>$9DB0o/ۢoAvOS;V\׎7 ttUwg#7𕵖8rQM2:hY{Ws_ gUi]LeV.EDB7~H_v@sJ6h,@{Ђ୍c!D9o'v@4_~g'U^1v@ZGBZ imm)|\e̒2': aӑxEUs0uxQgFv^g皓!L\OqF* ԀUzetGᄌ>pJF *(DN ;L5e{=6- L< M'tFIؔđ /lxrȞrE˛(GsI%#33PQdA:,IEnfʻ@A9oA@NPyʡ^ KC$EcGy7r|3݉ o|>Hጱ):f4JE2˨W=a6PШQ yt1 ftr.Y%vHh+^ʽj(w2ƨR"LgZFB=B!w;`g&ik"KQ9$,JLYRBq,լnVUJDD@H,vEGtJ69Q94%=gml@[A;D@!J܁~TT;I2)oZ&"un9KS09ʠ(n_X䌝wmir>N̓hiS H*Gp@y%BSRTP:c A*RB4!ny&v}9D0qA\ zMRr)5Q+عDzkJ,!+ 2v1*+.NԮ2gtvQWkʗ~o5xr!{쁯Y9KUV#e9#;j?Se#q%R$U@Ԓfa9PT{V/ ﶃ$w# R9+')Qhӓ1@Z1EFXn۬4LS^5ڍ]KW*>mꃮ~w{U{w4/aY);meM=g}2k_<9L0g0cc^ۂy;kA3|%gS4ay"L y~_τu_Gȗ!&oa ].SLk|֥R`߷8.4hpQa}4H5[seKl ,E% &0^R*g er08XARL:,\=mU%Zꭱº4ydCPŐrGBaϬVi.\<qNL WRqWy` 뀧ޘț݃~la}S^ ivqqѩБf\P@bBD^J@sHTctNBrt|cn0ծxl87-Xp^rÜ9Q H&9 ,ӝ N_ۂӯfl荰lĆG!MX`p-HO p^FBJTjZ,8}8 +ۃ-mhOdm^;K4"#d;3V:]sFo6#e4w𜩀r͕NZ4ոA]6.ڙfC#Xy &7Fk}dQ{%GIIWpD-AD rH# be=E=KFDD b FрG`m)c"-#3kȈ6s=X{sMaC{)6c󹹛TmV2#&Ի-l 0%::c"$q0(e^5zk__'sK1{r1}@3ld͊k.ˇ7 nOwhFqcڋշ.Fg8 ՆɼvyiPشhYL|*0+NnRnd =!"}:GX{)) JnHb΂wDðq I |' 5geOos8|[)ZbwifR[50 'H XkAႱ>r` iCž8:d+10BB23uS/IN Y$53ꝷ!c .N:I$혒vv\]*jUuNpXn^T ww kZ30=W^c!(z#\΢w{*(x1G@G JvE;+u鏱styl5D!˱ΥÖR"ajH $T H PCZ 4ox/Q0!S&aJ;f2 A@X/5^#Zހm=c$|~.6J% 3_瑩n\WvBz@tLf㪾7OBpSf V'B3!SffOᡬb=TMI1Y\rhO401Bd1d\z1KXܞ7+{eG\hC_9$~N䃺l|_1L_bu}J)&ƅ4ϲ_ 
q8vf\}D>[EK*?f37(Tso^N4,E]a^r>"\MյbLQ[;|*f^{@0iC(XR.Ѡ^frtA5+}LZ"f%S`a2:6z2Srb^)E?rlg"qF5{;<{e`\_)$j+`=X`tG}L4"Dho-(#QFR:0&p˵D4i>!zv_|gAg7w6WctڠSˡ 8QP%dNA1Mj2a}IzS@ KO_xS~HM޽?EnӇ aݠc:6 ej߽Z QI2(@.ʡD,m$g"EDͻ+w7ozPF9mкrRMbZνtpofKPkեʥ;:w~,ltTTqkWնfF-{Wpc؂t4xz k?lh0S'ҥ:0W2v $[1t3Ŝ9s˭#)XTgC۾EPsΖ}fNuHC&C+&O#O@n̏"Y f`dnղ[rߢ>lYd$.MV-4%3K%Mˬ'>]4P4!x!ޣӅ^]Ǽ"s>qmC>{^՗6jP.n]E._o~]wיIߣ??n=@hŽ,2(ߓޡd]‚aݣܛ J]g t~`bޠޣrcexr]*tV -m'^E;t Ϧ`m&E+qU*H SEA o ioA3Xbm~yM\Qdr;PCO}r>a 輚T*MU%C. cLB[/7Ё0f%昊:Y48o]ow/fF0zV_z1u+ybWsv77Ƣ{7t.<ŗizn)cf\wf-@:v|E6b n 7T!ǰ)!hD)1`B/|2Ȣcv'+4%H-y n1o Eϣ+#tց3os5[|lRo5U)T^t5ѹPE0K9\<{\!dqFr5Ӭ\s7~8KP$VQ2$iOEF  I 7Jd"\L۠|)Zͯ*}d*'+8:85ܒv̡*D(J4~vLM_qo"IY_y??ѫ.|富] CS)6-)!bҟHJB2Om%R\n t,LalE2$m,%v"PXPˊ,! LxQf?^ݢi,Se4ya |)p W K}ݭpڟ 8{- O(%Hie6,c`ě#h1 -dK!ف͒fؓL$^ ^#>"x^%9!ӨDf-IuXJJt9nGP a-dPQ`Sxr9[S lsRGJ!F/WAyd % ouQpv|2_ Xj@Lf%zg/x9Gwed3z5]>͗Gw>jsfVAjRά 얮]_]+S 7mZO ^:uס]^?'Jxm[os<}-e/ٜu-;n-un7Mo|ߡ畖aru3y^>u翺c Ӽ*qWzë{\鮯{nR{sn~ނ|:Tn~ nn$gn8[).G>Pm( mbgqŀ&,cP%xt *X|75y\»^ {VT899"$Ut7xe.h}:>+mё`R򟂬 6)Cx(J(0isI }Vp75ldrPI 5"H Ex\É͆EEMf[d=v Ebk=HiHc{[S]!*HEKKONUQQQȄ@pa5@(H^ gPJ7-fX?~;Ʃwt3OAFakHҙ@Jȫ.Y)O0{ MoKE$-xZI˰|K8c]HuJ`tvLE*+LwVvB D*Db֑]-.nZ`)1@.f ]`MvZKK-5_o= MUQqS}lTDD}*J  zMmq<J-HGttaopjyꝣ3SSxg9o& E{F&t4R[Aҽ٭"q@GAGR 6yVQ:OEmGvjRV6CM)82wy}SwiFIЀ3B;ɪ6~`poapoߍc)BF ɠ].DJ+CPdjuʑVLS,dU뽥7M2w/@ <N.f(@Mp`jbj W+6S BK|mB%Rt00K\Ġy*T4z) 、S@d n,T=!*PQ1r` %Fai-RyPg$maEpvT^Đ:fٿ@|t;bOo{vc(s38* _n>۬(ƽ'4;~FiaRDhxHQ)y&1ȶ&MҔ1x C0{x)% S$;UI"j8=Y q,} r͟~^101K@ ίWI3=ۛEB% "J 3Jy69#y|<)՝*տ4FWuA\ù37{aE >t6Ϧ>|#M썔ϞI\24^40YL?^(27?{l~驼SnzMwfe-fh"H*/=9aWʚKtQȔ*dH ~pu9VQ)ab'W{^2D" 6kvKY3=x398s5f=xh_رxgx >ĖoL=&b۩[AhKm /.K>Fm5.¢n*V\oҖs>Zu_xF{/W[ ~>nݸti z\UJXmt)04M~o ρ[bB*͗cY6^[PI¶תN_c 5ijT{7*/mG?k7W`zk=/ENHJZVٟjDtwĉ[kX{J#gtf%sUHk8㒌Z[Q&TIUHUz:U9UEir1򦭞1im 9Kip]J"Ccyc›}Grfq82JpbtAQ J!/s9 e&Hq1$/_uK/*3.WbII9sFɢ=I#r`GVv|{? 
"\q Yw1$i(1J)('兎V&f[ T b>eRw4X{mM9uYh$ RIQg <;tF0C&oC2:kœ<##nXVbю0()cfX^(ш%تs*cv tTw?IAy>Q 0>κOk/x6ͣM//5>_t<>t;KN_tq=˟g5&8ɟgYgJHBSmC"!-}MgcAVw;GnTLS49Ln|dxb`$-HQPZvjhlP)>\t6YxmS* IRj2J"X2F"2z4_TGcC]d/|Q>:'.t:wgX}pVWoσ`ov^T`%I̖.LMȰksa{ *e?[&6 Xz!VQ÷P[z)ݵ?^LH}B#-CQ$Toш" ? ? jx7+<6j;.Soۨ 䢨b#PF@93S-yڞJ/|?E\\['ejwzȳI M JPkn;?UwX:;K /jzSyxPT?fUGFW"f嬺7q<#5ǝ'ˇ;- ={7狙3 } 6̜$cqk$26$1M ^Z_wuڶHȭu]ù[$Z5!9 8hMĒ\To$ց}2\:sf6wp֬;[h,×mk0dVrpjWQ$89 0Nh[\uG\'\Y߄*ٹΞ!L,5h@o{mM2|\XJG!ubmOOwc׿ڨe#~TthLhThFqΫ֠z*lcmJc-Za7)36^ bwQ/7VHn-vh%rO& 9ҔVT:dK˚-q2r=VLQ'AZk8nXa=1Pp\JcFӄѰZ.L4HFsIR텖*=UPS0OəA!f|dmJrs/_BVR.GRBRG#6HN)Vp03'"\$\Og7XPWCh ':FTtirNv1`MJ&CcI,@SRy\.S(1D"dl*3L&m$԰Ȅ"*'OM.6% I%8b=w6֠YݍΠx&w 7 :ΒEV~zOM2 3&!"I:Υ;5r|Z*!0HiOIĉ-q0LP&hRD!:1ʬRͦ2蘰 -MĉL`d //مKvq]d/a$(b I@1h}ʇ )RHa(T1HTc'o3҄C0@K`'ax9{:.+|I.Ƀ,tA8!!;z/3&.ӷ=&%n,LN?i9g6czhR>Dn%1lBۧγ~ }oQ`4hT>=!ѧ>?AZ;d}rkmH~5EkSD_KњL-^&St[aLOUS>?u>Z uFBfJn> uxF1_CwR(ˉQmnJ(L. aZA.]f*i;TN+H]TQW@.ZUKWWJZu lKGNث*>!δACH_.Rq-6S++LlhŌq˂s1 왯Yխo;|Za}QV[JOo~3Nf^N}@^_oqNU1aV!}3vEjzp&^ Ԋ/TUQM΍9t%fԝV}  z=gS\~58S{lom҈V߾C}k8&L]29yza&WkQW@-#U;ߏR{zu䘝K]A%s4 9h3Q#5"ʭhswV0/A(Bf/m^VgxjѻlJ`mԯTƏeȺ N^4yBj8-r ETKK~[B!_7ɾ^;cY'u߭\|.Fm| ,5 7[hdE!}'w嬕pĵ#' }֧}Ma6 dON7Otgl6n-:::A>:>uj~^%LWU? 2cfFA1o JXDTkH'ǎւ{gNeS[pd\L%=IJG$+b=6GnA2 T2>@.ɂ3YpuQgfIX?=؀HLFMd)u~a u#q?DAOjv!@\Gt irfH ڂmA -Iu"JXmo17m(IQ{L4ErX39U\$# PDrf&lFgAϱ{(9]عqkMeCz)=e>N1 N h <L)F,Eb>hF')$4 ] sxZٞ䴳FFCr51KldhUb˭,QR% Tctئ-kqw ǹglvj-'z/ٮh-3YZa~mv n+6Fٸ=<:X(yۅ$5zMq 8( kO8!Vv(b.ok^@,Nޒ_?j|b|S7:s< SSqO`ַ|Gs%nԗuP}3rh^}=F=:晎n؏>JG4ns]fr^;ZN*FY*0 Vn.RnIW1@ŹK=5 JJa)H#c&ˉ |+ t7Cok8Mgx[?[)Z0bwO/lY `黻rY.KJl eJYZ8PCLiK{5tQR=ilhmBe.ԍQ[-wIZnjÚǮ8^Ԫr~9Gxői zX?b{rzx -^ 6dr"S^h f{o؈äQ(|(1& e9zni;A2bI2t2hU T @0co:2AVv4f EF]1@F. 
F)"(cMpf4Zokbf7*nBW҇\*ߍ~UodVO)ьƥNo]Y>zw|pMi=5,ū.h,8߾Q9]L XvV/M!uQ%l}h7qGtƮzyܒVcU)Gӫ;H.u/eyHC:]D=i~YYx) ډAT,Qǿ[^2Vu\e7/80ON]Iu(N(W?T"l~xzPgz3=86Y @ yt!NQ T: }ٜ7G45GI>BHBP TN&)Vkȳ뒕~Z&)[-ƏgZlx\"Eۢ]on~[ؓǘx>UNђҿLwm`9笕8fiy(}ZʳɥS/̰ }dѼaOzѤTّ9`fE5K-+\w2ǿ޳}Fv69MAϣ.^<A={JeKBJ%AExac LȪ[W0%܉W):]Y;=m;]*ķ\ =`9 1`[z͢rMtvy~ lkߜssA.&gC$|uMh )'6{|y6k؁'%v!d d)堭1N2b@*XKEe^ʳ%JMPeH{qn|QRrWi2sE^P}%h4{ߴyr>^M6`JCyN%հ}rl.yOz1w6Y7ww|`kq*ݽݽt{%5٠ޝE9~? <*Alxp RPMT24y3/wm#44c" mK`bL(DHLMΘ7(\CJ h߬/=en&w&J9\M񇗾C9KE*kI5'֊0Bd*dɸ)QQ@Ax;%zdr4!z+' ;(vy7<|N'ulrޫ OɢR> Sq&{) ݍ#{}nQTmlJHKYsjz l6>n&&}*&;H/9 Jl`%ml*WҽJ3=W]hޟsεlu .E$kuDolE+@SܩKΗ }W33)/N>#oJMs1c@c k)(U:vuF.^Knk1s/vyip0{KeNF{,:+U jq@2$:;DZRiDo1ѡ8<o47|QZcMQImFଐTЗۚGI֋\ zfGLJx[7ܺm,Z?=OdD6OnǼ O'esW8(,2Fk g.SsK瘋.s}D_uls5w@W*Je_^g}RD.+ynT]J=u*{uUTjPWoP]iRWLE0ꪒk졨+& 黺b*࠮ޠ2ƁCBW\E]Uj]STWV­`\y7!^r,Z~2vW#<-}q8DF%矾zt9/f1/w]T`#SBmwgEG0q5I>bY8._օNh ;Y^#bf~y1. @Ըv(d`j-Sk}R#l+ӏ'pG muBuܾY?oJ޾57쳬&MZdcSO6Dn$d%"Ebw?|۔c­^%W >cJׇb1^B-9kA{h8 uU ևxq(R{RoQ]yk_t``U%塨J=T! QWn˥wGazu%gCbzdM/OBW=܋yXM-Ekg`R=;<cs_׬1Djv훯%4l j *M"I$eF˒m߈}쎋^ y[¸CYs1?V);}wK{O UH%.o\4 x)j1)50y]ӭ]^EG`" H" Ĝ1)oP"*$ěhM8vN+և\"lxKPNQEzFhc̽5gRavɑqy.`g Aj˒J*7XevV\̈aQzHI蔷}E/l~7bd:7x]+߻vGJw)5G)=¿rBb+P9!4$0!"T"AN87WyrB JQ,gpԽ}جgsYǦt5̝8&Luoz8l&Pv||J*LJrE55"c QI0>gV*\ 1+eW*\w&4AۖDz^Q1e1h/5HBE׎A0Q0n$XAFUh<c~ ƋAݬ0aѯA1%OAhô:iCX!5:8@Tb`YeIl_͵ve=D xR10KW4({AmަA6pނL( $"VJ) GMR}A ͽ'"1"$gQDY VmP tTV'[XZq[aQH4F"ੑq@1&X;70)7$]j$mE% ްGhA^wz e𯯺ճBn'h_{3_t|j"5&IBijl!sÃ>oG/g&h%)AY=&aۍ KɌ5_k$΢Y7{fx`b}E95s .N{SDݽKw!7 Nٟeq0 n3~QB[Ԩv5J*[|ݠ7 ?|_0Ef1ѝ+9"uQ`aQB9a7crs甇Wo).r7q`FZ.*Dk>\%_q"(D6J!l6<]P4-V%˗]B<Y7w^1io1ZM@ ,_۲zw3}vߧ9LeE|xs:ۣ(|Gi)UpMe%Y~շO|YZVw}/km/_{8<F7 fC˩c2|j&q,ڙ(>hJr44q}p?5BzN^O3Q妗Z=TqRгOdlCR taolo?d& )܁r$DCk.*)Z)CpY NDH*-zᚐc%x[teۇ\6|-&}[dU *ÁO/,Z<#Z ɱa"ud^0RWLTfӶJ=`ޅG.]sb.] 
' ?nR]&q꺲 # .ɏj+jUւ"} B2 TCdeiC`0:'qe];Yz8 CdqX̪ml *&̒**!BէuϭYt=.Ig[VزKW=4p@+-wd4n_0ve/~=&o[떢EC$ɜj[:{N:ϭCvW=7o.$|T;myPΈVHP=q3MI\(mWCx'S$93PCQ:tIPF3]/2d J\3u5m4% b4HξBo_(HvwǣA?<_i2<ɓLD !16c&H$/$Sh3ٜW%6D/54z4PpQ:z 9yIaF'e\@Á:y!i$w]%fΖY:95~Ǡ[q&ʤ+\BXni?;xR8eTV:]bB &)Ѥ9sy &qYvS 9(% i!Sͥ`*QfsEg8@Tp lN x2#0|};aQm3N ДŘYa%}اݿB)HLnau=q0+ӛNoRo tL( 0K-h͙DULIPK<՞J!p.zNA6?s۞d#?5p@]UV-l?Mr"츩Ꮦ uף 4sfv)só4En/XRR<8uE]J,;y)7ҰpAZmKظF t&+,i)I7[/#,a/ߕ8Ǵ215J0_>V)]o]8-t!;Yy@gj.XM~_c%1b4` >d2԰7vyY|BzIn~v3!~ۣ=߆w$I_AiW7e#ei;V'V(6nE2 hhȬ_Dcϴ{A* Ab2},<  fCmFg=vJ;Hu"HhjnBh>7 6WRaN-uX,%;,dx߹zXrwՊXiA| .Fm|| MEkǂ8'&gRk Z≯W^jCɀb89TrcIxOa i'ͱCG'BG"if\8)`0w-@odX uXA:AC y-Hd^WD1E4+cZ*&0`eA"rG !" * q.@.b VQ9"<49!Ѻ:'HdDSzvu )v$w"yϩ4\PSL\.)%ӊoJkJRrfVU&XŨL2ZdTN]EuFWUWH0pz1* RUVsWWJթ1D RWH0bU&WKQWH-sWWJ;uՕ"|W`I.F]eru5Օ" [TW Z:X5BﭞԬ\ԁ57wKG ձl>:`7?_y}o2Ά\>;+O{DeT.mg ԕ SF\ _7KID*?SǑzK;&i9E$ "9!2P$pFTɀwI' [NNrPh@ZP^"?hX|un wkch6sKz/!0%v]Rk%vI]RkzY慙/@GwI$Z.KjZַ}$l̥6ޏk b*Լڊt$N8k5VޝWkP|;-#DQ|`B!oVU$%g60ii"֝"D %p) bTdL,h5T֖62Bl{ Ox2xFqS;xTnwJoVhCa?Nfc=ﵡQ^@thfoW{gw (M~erσ=A2kWơrbT9fIurY U!143xn(82Gb% zֆ u\oZxOS8mdD`rVx2Y)OLja>Om4g!\\V f'5˺>`uBBd+^^ݘXzݫm'oۈzі6l/ ݶ e%qisrk.Nm:q$eT>x7T)>⊖G bcXM}8lXntφeG&)WpagB\ DŽ݄bJ)gly{gn4ߎVx1 6LDC_z{h^?9P. E݉%I 7 Of/LtQN&fB9-sM"!8.ZmLM񜘜|Bk>N? oT{8|Ƽp;oir1svZMg:~R1_^w D2O~ICcww1ۂC_oRO8{{wu˹?ǏƏE/>܀J?+o/kr{Q4NSQ).x뜡2Dy &g GrLH_}/6LjLp| 't2+ d)ZQ;c1yr\ ڭQB9.h-D<ʍʥ+*N)킫m]D0IW$>?yk^Ùq<7-?ON' U}8[Ϊ+U9gGן_JCZy{MF" 9 P8u`=OH?B< (ҠZՌRBr(U&C┡^{-THƱ`c)UI`*5ckl׌ϣ8c_]ZօӅGY6L?! 
ه)SHr5Oː?jK%8 A1TBsh$S&Aۮ95 Òqy.Z5Wk떵v`PUIHeӂ(yTB$^ȴP}օ$hFem>dU3"!pjT܏R$eTG #nϜ6dN:I;]A(*ß.Z nY邕ND%ԅ;,tFr҅;S+wT*u~])Z }{;EdX!Ab:hl;W$Y Q 5"Vtt ڵ*8^mKq*rZ Z%DSstr6@t:\N& T(Ӑ#m)(5xUm\Jw4=ٳua8z6Dd~^f d(,Yx˨\0Ԓ 9{P[: .m+J; D hʎPuAé: !U; -@ Lq>0HrQ)<1 ǜTjdEWNp0:z.|BIDL` ܙ!LD$4BhSpց8\yOzم2{}^eOKbsAz-vI) <&_t- p4 'tqf I S[4"t';,xxA9FB%8E2xQ;7!Fz3f2'9ZLG\R{%C\sAQ_Ge,s.<ژ$; ^m.J\t>Qc $R;mExk-."z6ѐ< ǽie~bWmgߜȾшh͜2g1o9ԞAUH |)¨k#pwIk|qsKdġ%m5ڏQף?ʼD,lTAS( -KJ+wP>!jo{bMٹ>SGˠYvH=}r fis,A_|dxbp$-%*eur{mhlJ|t5my8LVkp >FLУbvNm}DBDnـ%@3Y$S"^>zpR^AN4*A "e4iNKUbtJ H$T+tmv S/O_k!!6Aퟺ',MbNEãݮn9[t=)y5 yd#Wv5Ó/7yc~}̷v5Y9: ;9ЧRerx7Vo密[<*5c?\?ȭ6ז28\PrDJ oD`GN(*Kg ŁӅ.GxUd Q!dP`Kœ >yd蔍!IuAKD9I8Tɇl5#x%! #ٷ5* M&b>=OJ^9O#0Fg_I9J:*`T9OB(eoCq goQ$-L2 @B{4ӦaPL+w ԛ+hM$q5(U:,*t%PL.=O8P+p𒊭G(fqQRl]nUJ#bWXl  2W,Ř+B(RUv]@Uh^2ڂ/\^1WU\0/\UiQ\0zʂҼ sUvŘ*.ڗb,xlmxRj1Wh]Sm6ٴ& KUn^&o7bum[bS "aΌ>zW&?r~ |5|sǘFFD"p& _|e ֽ +r|*y1>pЭ4Kih_uL"r~4Y> սfiX4Y̦mx\̧ZHۓAJ} [ Ds.7ԵaפkRS; 1o&e2Joᨖ ΍w;{˦ ! $|MMQF*:‡&y.+L0;fh]1jڥ]m>;#SMem@xgI* yzpVԹk o?8_L|Q(I6׉obePѷ듮۲tQ rzͯ}SdlPXlnVajQyG.4 W}Nj" J@R $r(H1 I{ $&O9$X`_>9h2Wd2V>EirҪ!'כ8sS&^·E#T{}H}kSxɦ67 ?n ԕ wW%EQ:eh@E_cLBkoD #Y,b*2jBiܼlea2V^>[7HAOqmحv\w1EyR@8{= ʒR%'v[ _Ob|&~7p~78C? C?ChK#aPbK ڴxC*v. c%XشIw67fyv4QKFf7Z _*P?BnSSO{M~^0lQMfiig5s6oM,尿vq5r -<}sx6BѽYn9ݩ.K::=IG־_wLi uF ;돻iNR=;Zx)Z/l;bFڌM@]`Pϛ3v4wAԻɗY{̿TvO\Ѷ:EH3Ww֡ktzbhnH-p"tJ :'Zt NWtBS,[D+([|! A1}bJwӭ_R~Vfmf8zY>mEh>ߦƞSjAK ߍΩfA"+d=_řew9|Pg;E/m> VXVV_b4zT}dڟn{jp|SEՖ ݖZZw6L>G,ۄMPAom[ }s$_9p ZD#tr6}Õy%ܞZ= zj޷O |ےPzwdqEa@ڹd6d]u&-lq 4h^՗oT{~ӛNl9?hgٍ)-Wyanc]/z;8ϓK&Ik+Z|9U\bjWvJ?֮U J;SNcxNi7snu1z~aeѥ:MSƩ)WJE7OoX-Yn Kr0^Yg%3d=]4PtFBZGK[~k-Hjw[[y_@ieۇJI&_?lr*fFvdݽyq6:UԸ;kW߫{󫭸ͮe/V#%c1לr‚b{ixUzáT@rm=gLѩ ξޱwǐIi}ԋx=%=1(e͒'!,:ſ}%Hm&JIqzj@{ PƃvvDZσ|Hz5|qu7)4?W)V*/(h`\ˆX*pdopʹPX򁕉9uƤ1/*Uz늷ѐ.K@""LA@ׄ_Y"&4,d엥I>I$oJ|zeK #s8adiD3Wvvz`Slxx #S\VWC- y1OC.N:T|s}gᦼdj'Jbx Ǧ$8tBQ_ <[%rp/`g{/Y^[0 !RJ6T</*MJNZ'tlIHNE|6]s>)KQ20}>X2@Yɞ9l/m\xL4K Akt&(#[F+kP.^Dz,6wpvJ.. 
O҂$c!Y $-7HI:m*-18xrBI}FD"1I,P\HYBWB(^Qd /SJy}5rUs)0K-議kIMQ&JR2#s{lH{RmdA~z}ǒ܁֬2wp3wmHt LoR| b7e@XGYr<+J-ǖKN$ݪfO) ^VmN/Sk: YiDe.YEL/H\Ƴno񶆞 L1#\gXeCɬmM0Ǫ.....d*F)3O,CںdjRL.'\R+P6K!1uYS*BRyS~Up3P(نY`Dӆ\[}9a݃=C[ u79Kh[qL9[im!h2b o^xNKL_wfP~"h ;^u`lN Mp?[sKknhs"|s >F ˙g#/oLyfQAFAZa@eHYr"%#K:sAYFT e*j:x쮮/=4Ko۞lGRzXq.8^KvZ nJRqŽi%F+$(EdLRL!SNB꽌ԭ\5q'Y.RxA.ύB*#+j݃Y߁ݱW_bSk9Ţb}^2ˊ`enY /L5R=rngS TE!Xe}e(M4r#[9㰆?aJilyxr[/%P|~4O^ź0bY7n K1IE@XḋHȕ M RD)Q7Yu7I6S2%;p\~,[#S\tޢB$|Ys?/]i|Ǘ i~Oxfs昧s̺̕yzsbjNNq\~0һ^5n3YKjm< \2ndնF+>ݟtlQnHQne E(IW8#炰7o" 6-3 ̔fM藏><2-t96lRO$?X3@; @W}w1:'KUt1MzR<^;R//S*[5IC_~kh|y٥m25mo缕6N*~yRd.4|gA|6f>i;}=xH^ɥpG`26 ZQk:}2`ҿ5\I˼yrC(;~[xf3p|cr֣';f_~׵1]R %K0edQ cA"fQq2 *s7&` t";L*[I?+^~ ߘc-pT Ɛ]EB͗U!/V'/@^}~|μTIE-y!y2+P=@`FFmM1*i1K.($h}6g&ZI | U;2V3g($7ʱPXxMaOZjqpG)sO_\\^!fR5m6-c1m@r2z#BIs#E# y}8'4c.{;lnGfzէt0EF%R6輲,Rio ºGx6=4+Sγ!HYAt`6Biw&[<5etYR Չ NJM vBhW~LrLW|y ~rIRNus3W fǵv d|"O^=LҹRX?S3*]@zXE}P_V S?!|\bai?7@<}h+hq_ jytHwf7דbxK_0͵/w4*#:NHw;1o88JKԝЪr=ğiњyeѲ;:e` G g5ni(ȝvLu!Y d$->$9;Aq/ek YD9_ͳRVcܱUt~}o?ji|\ű%'JSRѣլLi/h\uu^E<ӝVvYhgD {iH^ BֈE % \g@`wZ"C 6W5sC(4$HDkoilgG> p""ؠiIuLӫ^u~j@!֚ZHhx 6'KRV,M!T>",w"˻1"P.OJ¸S<1mA қƬBy` xL!<6+uñ&xybZGGƠB!8fs2ʈ̽/D%kJ.&Ç&)_]Y)m +0u2hs%|AR3E[xz^QCwBMu_ZxF/ܟt^4V- +HeݐEkCyٮ.Aw) {Pn&ٱP/ 6X/Rv&-|xt9_zW{'٤wnI`fO-Ywsn_M.X _7|p_8ijN]0x#"~Kic;xRۊc$+<\{0[OYb(Lh}jCÉUpt`5^^?@DY5i zebMj(ƾr duqcyϘ_!4JTz~s&xkjOi7xSo M)78xSo -6xSovDŽJ8# !1&|t3@p9{@S2RCxx4SG$7"ѵ"bkozӚ5[!I ThMՕ5ڛ^{koz RyYECϋʽbQEX) d\7P654Ûfx o-ё%Q oM3i74ÛfLsZz2ohIg]TABxCZvh&ٜdg]@tl PE w*[#DEPI;#0 !,x'o}3F EcQƒ{ !@IX$n7/nJ-ߞ\;3;w!>_5yѤT\QR'*KVH, %D!S(Lֆhz/rq 8yoSHxύB*#U͜$ry;jLtMN(>_WD0RݢhED@H*sJCdn9ZuËi_YX 7KQl1#+̼Vr yDIKUݸ#[$%0w/7~5LXJ[6l0^/l":\eQϛ5 F9Ʃ4;).Di:'tv?乷ͻpTSҏYZ0v΁aF?SUwL]ExC7*faZ5ГS-xwl NMA^55CaPNXoJzZqNy蹪J~tݫfȝȖ?" 
9XO] $y!`^vq |)nsUV"gRPuwpQ^(8YmJ<Uw<k gњ4 ԀB6nM}geC# 6k1(- }d `BQ-nOZ=.̸`aLYh֙g``$AS\m5ĉwV5CϮ}͇}`uʵvF㥂-.%&q&`/^X-O:mᒁI`2QK-L"(:`Sˮ&KJຟ#VyYQs:g*EKNر=n}vvIyCCY_=tM/7?^yS0wPGzjO]]QRRTJɓK*"Pڋ".>R$$Q䝣\nޚz#1z;1x* 8vghJ<"p1r4(Bjq(q $gڬKY%ZFH/ I9pV\"&t9†W]m*SGblRݹ۾n;ԮK.uc>FG\j}6ϴ1`9:Ҫ uߏQA))s)'ěL(OUDrS: 2u2ƈBi$tK0DYU,ב $PL!o 옍Ԙ`$BE* $2謏 } F9ᔕxn5qSC;AX֞޾Q܏v|_P?k?^STڿ;.o>^D+!oEkS+x%ƿp`-ĕ,ĺ3#p@޽dyJnT9=3qF*m;K$RMDf;@xd$5f̲.*vM[/, S2H;=1BIhDms7\P>1 ;.csti1/tjzf1646vv Znk͔Ù)1B`Z]bUEO"b'E[2>Q%RI$°`J-&}DDj2,jT'ϼa/( /"1\,Eeځ-p[\{~} 9%3vgP|&w TUsf3s4, Wm$0H_\ *k+7d t'׏ur5;kv'D)"md2+E2p@U͉%43n7p ]nXC:fǏUMΟ.;qn֫MAgU~9͏r$?~jbdR9K-J :x"rh.rg9a$ Ф< PSx`vHT<1KKI 1mJ } {K:?VGx Y1Fj.(ZY(7<\̮SdoC}m͟iohmP`iXhmPZcֆk9ϋڼ0ZPd@`I\Bi 3W/\IÉ"Gd@`%ј+ר1W塛++әhV#2Wј+x8NQZ\w+͸~KuRlz\skO״pX}qd r ?~̚zs -*I%}Q\>}v[:"3ky4f5GcQZkLz%3/L˥Gd@`ј+c1W J)#hV#2Wdz q9bPZ\BvG ߏ{=[T f/Gy?~7h{@8cN`sHfp+KjH0eeHXRv|-iG^4EGdLN;e3%,:=f,K &RGI@::c1$X rhr*W9L\>c=ZoyU ȼ (R6L6T82X+0g@9}|lH.R:ɺ[G#7ѓidILY½A j0R*.R0Ff$)NE;C㰴ND8$%({i! Dt&Ւ(De&v݉+ E]75v0Mg 5Bs}?Zäcf75MpgBÄ|vjL=XS4z~p-"}Q~4-L\hC_>/0-k)Ja~+>Łu,XٺvgG*Ǯ[Aۦp;B nS!1zG`߯-IN, 3| E=hlz}'`Z@;LT˜ F-Ѭ33rI)ღDJk,ͭ84>P}wf;Ok:!{o};J~>|5q7F/77#-(Q.2ڨ,C5zѼ4ZR酀^4.Ȝ\@l饷ef t[Q/5w7Sť*W: v*nÿrq#n ms?n6KJຟ#VyYQs:g*EKN}wePEǴv0L.G>褀WӸA9\"`M@W +OC m|Rk蛱c|6\w`}cam+l+7mlv5-(y/؅v{M1Y_=tM/7oo*Uفn +| i {41(/H Sx9sa6deCxσt-k=%i[Q=dR99BZ11[o^ bvPbTd9 6K-]f˻ƛǛzbFzhйbcv7'7!(8ƓJ('6~ZDFAO~R>("MǤ aBɘŎ1]wZ icg0.`*]X ^ǹoj~CW^ú[QHYO'"d!Q7BrNdKKYa8Yw쾁ŷ>O6S_0<0uWu{գ^=4<ͧ_'"5N ѹ3F. Y'3HS3Af)(%*VcKp}rFb d *$ @`9/IK$Jۄ!&ƦHt*ӥ Bpv ,qyHia+(σk>J5CbԬKOwx>&'mtfA^IWOW,Mlmeo6|/bW4 ǂ(CBbB)D0pǿD$%)J@XI8J sR1ln޹:|uy>/AFưP* )_bB=-fiC1 cONOIw"dJK6))h i"mk[ miC}hV8H j"(_7(BDI_KP9aIz@x[do}L ]%XSA:hW 8TY[/zGQB*T7*ke*N M3M_F'wo_CG֭3At7$Mbͺj ݴHQX] t>51L3;qX+j+jp+j@+j+¦%"d$p dDxʡdCF\նIdVXS,)KJ2plTT^|& ]Z}knfC!ac%\ γ?M [,;Z}1RQL')ԂӻO-6FfE33.K&;IR"^^K=8zD/ %-%2Mr)i@TbCA ғPQ@SzsƜ- D&9h0 *"$7Frep\`󐈥>e>;o tqr՗r^Vom½{Tx{t>QAzwD_Wn6?6*}dmi! 
>jcj ٣DiHh6v1!50(G>J] Znyy4 =Ϭ~eËM r?%.ŧ?Ct2,d:r4^h)OB(%RH&TJ )י>S:-g+O)"ID7FhSj<,]4ހݔ<}8I64b!icg9ϖqd਼{'1J 6L(#Ik':KORp' I`|SdtKJf RR1d >Uœ>sNsH`aj Vr,l2B 툅G{KOur➳/36?E/o'/#ˆM5Bb-Qg&%6I+!X8S4jDƴ3 ~ UaRBG UaS;ODgW!rO@(EpL g7bˋ&.Zǡ QF/S&#$gDWU 1Ejga#EҰBff( h3֤,ȩG(c*6 bQ=bcm}'6x{4PK;ka9etyi/$o%U/u|#6&+vmdZ|z6{)Și3Ӄv6&y*o9?K;R CHNC/k: P5!n -̘)]@a5`R\9\j-YAG g<Ύp 'y\P)Q1gTrِd210:jk94spHut7t ϶nqQ<2m薛^rkn>f\j'Imv i8S#-oa*E0ELT@~ooeԧ;sG1?^N<+` r ï(i~V|gEk7D5 <~G귰'þݽpnyOۓ.]1@CZde\9_{.c.8_C},]oy  zlX}=eQ?3g.쯾`VW;Ӌ|/?-,>3hU+3.Е(3ng_X߆z5f\-zY_Muux09(- S3(b$Yuё& 9 . /‹)\r,H.*$}.! "& Tݟ"OZKM-X>D!)78n7͸e^a&Eyuog'mR;RS:#"Q`Zvǀ|6翟pU! dAɺGch4յW%mv+JJX !T )YE)x&E$U)2DeT9åQHB2`E Y8{L#R*/"Z oe_vW=:<(TynVZu* gJKi-d1Q`bVu< ҖAJJ `$X) (Wʒȝh}$Y8{<ոlw{3v^5bf]1*)Kb! ;-6DY`+RW,bg(]((}dΎeԑ|UK38ʊ"51bsssX9e̡&N:a)=c~P6U8 琗)rՇIW[6on#~??Z,?J/3RDʇD59EËWEC !PЈGAږxN Y$uS>g$INdyrȧ,dS`(ՀH2)dHkesj,*wQ(a<ӪH7a4 gm1!!ٜ3yzGvwk뱃KѶ9k[nz\\w+ٷHR &J5%TVJAm29 UuE%ztF#}F[Cms{JB#ɒ\A@zE$_$zGJeňD]u$rX=K4ޠ˞$ F $èJL?X F^7>v >g-C%Cz)j&qwو/ځ/,s0t1+hs'150f}!21 0 >ח)!2?,q GԴ{kj>_L#4Ur}#ю4Kﮝ]vytn~- 9ʞwZݼwVe~wYc7[4XظWp_|ъp,m-\#5k䗇[ e)OB$Nh΢f!S " "9,)VwK:k#Olg[޺fɽ  ;(:J̆3b<p Q8v"0&j 竡$i6M~м-/̜8otEmn `a0ϋ^w/Vu*Y/";t,zQd#01^L>}οp#{{P,wf\O/Xv99#u( z#]koG+N0Tn=3Lo ӢVk{h-QR+l@lQ[q9U%\l}7;-N5G{u5v3x 5H_GCUV?VuAW.O5vợ[*IZYqVۃnvt+ UB;yҳ{!V"1獍kɁE9hZHթJOȤ2⦛'yN*,w:Ii?}ߙ7ծvPJ;޵8UŔQGh6+jNh:p] Fi#b\莭jPb8ͺoW?cp/^] Xj] m<H_dr}Ki ay'קo-Eժ/k΍=Ћ.zCguBQ~c^^72:6WԤWpڦڂy}աooݵŶFʎ{]Чwk-Ѣ&B/x̼V4c%,wR',O?zC!xSjޞRx뀯1\z) U"r.X4Ah7g{.:0@nOiSMz*r<4u4taR\F)Ɲ6A.ҵ'_1&ia#G>fe>ڂD[7!x~)ivY#)= 7o>瞚OgfƏ'M@*Wܓ!m\`dઙSf`xh\5~2ps-=vլUUҸgJujE͍X_.=L~>ۍw-'TЇ_L-ޜ!8))qUC}߄5R_|6‰8u]Oq?N? 
oqyO?j`?`['?Kp"QOΏSNbr?-w[]eX0bSMs#eIlϕfqI#{wDut>ee#a Nw KNJRr0= L<5k=v%׬g%(9/ U3'W\d3VJk޼Z$Wv Ohtmb-egk0G3 "Wڶlah k_GhkuJ\[yc>ewp@> -k~⿥ogv͗ɺn;rvȶj} RR`mb2sДe$="JFWD~mW^/ J' F̾Ko&6e-OGǹ|\5PdneEAֱQDMAh.kﴑ]Y`\a}RH !6Ęe,+LZzCĎܐ!$;mxZN)3P 2T:#_q T!ado+qj AQAkZ޳m"CT)G V^ VeYuɤ9IbׅT(BTQH-F\vȥLF*!d38fxcFyTblM&ܐфdV^{!SHhQBLc7 ]-`%$3C )ƊPa*!`5ThTku&_mP`F'KJm$Y 0 n[߇weq2 |:d dҌd}1/%} ڸO0NOMʆb7MVU(*#sHT5DA+I2-*{'1))(9aM^GkIލP»≐;Ba\:PhhM?R9 50d7M9Z4Vjh)FȬ,B߅a̺\قd/4DxԾ8%vJd%PIQ)/>@@h5"aB4VzU"=6to{ LQ$E N+rLc-OhBk`QQXkԁs.Ìi`;j80QT,Y!W&7W6+pydJU8FmUޤ8d, Ac\pB5|5uJֶi19UL.sq CXFBԞ4Vh4ntÜk=E`2!KN\ѽs( J[r(j) c['lu:UYpiK!SIBѤJLIlH`  \DcT4`n;#s . @KJZ7!yA3 vT)¨(ȃvTx&D&zZV2 ,+* :6VYh\h= A}QPVGفrjGMۀ8ڂ3ΎdA X?<9YKk LPqoܮİnhl*dO+MĐʅN.A!PN:lBgU C ;bC!eE HV( ĈOz@U4h =s6"ʔEq19"ʍB 3vRCc |!kOdiL$ȤO HՔظh c4YÄ>?Je}05o:k Pٗ vGWq`Q ,ga,+ΐ:t=`eFpJ7a"РƇJFxU{Jx3';`8e{ %gJJ|Pksڌ!XKo&.ń2d[?& ( 0 ]Xj7?ά#Iا] Okx0ǍE-NS$MR-k7,b'%6YdTUddDVmfK$/-i+i1FCZNjAø+|eC_fZ_BG?BRUl۩*uZ@ZqE "eʔ[ams"0hs[꭬d|tpmZKi zL)ATj}y /q4ۥ*giyZ[l:~|7=/[~.evW!02Ky2zi #r+KINhl9:N%ܤx7Z>?7&{DRo= juIrdOC$%LۻhFOWYR9mJb"!Qa4|h_x4/. 󋈆4jB憑1WQa)^;K(ybF5+F9aKqx1{N})]f>쏆K.Ɠ#\/14E؃Ĥv/C/W㇞LpzoBm> Ssjm.~6 r I{@8/g79t}͕ᅴpfg6Km=~}7'|1֙a6out?*ls4lj{3J *E)Ib}b!r*}&.5Wd@-ɹ*FZQRM#%3Bk]. 
V"Nh{"栏+Ю`% CW;z<:G ~P) ʣXO_{M׬W;0<2>W9ͣB+>Y@h u\ysMDW S3O7ZM -v8v^N7EAL֥v~sTkYԷS}SN>[~۪7ITz:}]߃^t2vvN ;;ag'석vvN ;;ag'석vvN ;;ag'석vvN ;;ag'석vvN ;;ag'석vvN ;;ag':;A[$,tvqy3@ڭI B°{dzgۻh\ϼ.> -\ ޢ/fɰϔ,9zۏo?trK!ڑZ|= MUa™eQq)/dYΓDoB?8eIqmjkrW9l{)b iƩPWmkxzHC;hD b[HBK6yDƘ$rFيJd8GFT,6EF#JRY/f6|&'Ɩ 2^bWq ./jWӎSl Zm`i<)jxLd*tx$]9dDВ88%LkTo$!gHEI3A6 >p>fY*)+jmP³x)~jmeh"EܫV_q/V9|d])i)AكW #dt(%3iZ-b5q[ڃ]O\^g5-9.2R..CvG#IK3!&OVLTbCJ%%є xmLvvRa58f`tSR5nx ~<.E`WM A0:`t#FG0:`t#FG0:`t#FG0:`t#FG0:`t#FG0:`t#ѿNgGkqoT-Vr2pbZә⥬F^)QOcPa_> 7)|n擻YH,ņh(W$ۨ!,KR4f!$?6_Ow>.=RǶ$A)5.-|k~ʷVQ%$ )׌O4dʃvi庿I+rZX~ nrk߽{6O=EEۅ|gσz-x?:WZ3SMcmK+ZJuZM8 %ʹiŵo5NKoS[8).^_=ƷY,6iQ#XgL#O5ZBHcwj~^7 m)z뻲ӂ},g{^Z6))' 1DS26=K6 24 oMyY/\+s }#*h}Hhu$]gϧ=ZCvG[#IW**Ƅ*[zwJ&q+aI_XBn%\vgpvmx^vyf>4.&n^u"x_X=XWy:<fOTn'%̳6g/&Zyqs oj 6A[>$+C3b[A>l6b[Ai : Jii϶V8eDmF;Ӓl<-Y|T,f0 ~UZ9 ,X$*r!!`^'ּ Z:_Vt$.)nJuY2(4qo tṮhJrk9$$ո7@YOR'aq\}FLX+4մ24{D)RكA2iq7o,OOB$ ϙ+%LO#Wﴼz)I:;) H`"i>J0AMX+jj@N*u$F0KmԚ3LIF<V B$2l"8M1N6+ߍSo8c_N*@{E2'2ZA[5W(gcP\\Vt\rB7W\)ʵ>'t*\lJ+HJ)h4Ĝ8t⪳AW -;zsR| eӕ W.Jŏ+gu9b7,L\Q_'.狦fs/YPyAG%r&,5_}m\iUQ%JI{svo%̥bx'bxZ\=OS]1WL^n kQe<>~LbV]XIХ ߥVWHo ..l`*rY1SGEO>*YiS-ׅ{G#)3ǩu#,=qb5;6"[}hVN[-ihꊪA-M)9!/2BhR6gq"En;@dw^$˻(rKsI%#3g13dJQ(.PyP[6ljys v aiMŔ`:=1BIaJіS-ZgŲd< -ym"F;cl <&S`:YFAQg˱l u:++;+%f`AY"6Q%XL$bd9)'ʐj.J" nVL2ލj2ljT'ϼaILOftv.[2R@9Sm7]Y*U|*)\ecQĥSi`!f4fh:Ʀ) }_<k6Am=~gʀ̀}qֶGM !V#?e<_x\*kGthzCΑN# &E4X, ULI6'l?i2gi.9H̥ :l|JeAUY !#zjA04mُ,BkK-׬ceVV8?6L+m0^e|4LEs(^}m 0VB% @€쁆HHT=<sw^NKg9a$ Ф< PSxـq5HT<1KKI 1mJ } {K s Z} (1dpF.Q,|"0R_io = =)LG5[iN$P`aΆAU\PZmΠR*J{wȠ@Q\N6?("`pKjކ58xB:c !,|2JIsz5J¾A|pCg{tLVw b 9a.M"H [C|ڃyd'A ԡkJ_?~y~[Mb- j>| ^N>mn=ݪCW9܉9 d3+%M 1 Q҉`Up밂d۫@=$z $ǂDؐ |V!)ꂦm$!D9h- kJN$/DC9b9imc` Z@AOo=9єϭ װ TaMh*I^ϜH͋`,I09XrK{ԇa gy,DFH.37c3.FI5əf͙C Idv`G7A0-d)sj|xs)xœ`-\&eؚ8 ޅb2yBa;`2x:rofh+}_/ڬY[Po?(:hTX9a 4(x1Xp1{UxVv#-JRe.qL2)# pˬ,k9-q8 cu@ߝwo)Q~EYȳoɳl#~aҏv~p‚JQ`|ZĎ>a )1O@u4\o>\nJ]e{>7&jOYAOܔ2>Zwr{\\MhΞ&)S L 1ɍ`A#Fe&PBklEB8zp\qozp\"Q$ \,@%(A5Ry C$*1ADhj $Rk3KmԚ3 3%uT ge' ma&NT},G+ GWJ-Kv jtW3] 
ӛx|:c#B2"GH"K)(DdCtHX XcxuG-[<\d2p{KLNZj:S&13*QNzӊAj7AO ĖW(i:?H`OfE1.{֢iإ%T,+("ŵ"Z4*)e^ąTTm/dS3M;7{$zm Û=hֻ ݳ&ӭ3Ĉ|)~ urH쟯[4]fo.|lк~yKFۯOYs}o`x7 A6:Vo73O~<ϊSRǓIYU_uH/;&HZ[uᐋϟP3P-nw!l 1o8AU F7فe@"o嗭Ư__ort;snPE!,Ik&QՄU8:6zD:[IUkΡoXaųu3/h5^zQ҃7WO\n OI5:ct>5*eK A"\R&|FB &FW50fJZ΢ƽhxv4v˫?W/`zDžZOeYi_F6eJ2=y^ =gv6WA=qAB M6oRvBױ-4-{*ebv]2MceӴj}|'4a8 ϧob'_P8P #(&9 hJĠL Z i]bbnW5< Y Hj6uBnkrMZ]b(9b;ӎ]6vl=}Jeu&IfHfcI)$C%ї 9#RdԶc&b )2#C E'MmM&"2pd9Cc{ؙ8a/2a<ؙ~j}Ojʂ)KdT`6^\Rx&('lѣ:`ߘ huRg(ڙcB KV:M{ҌPtp :"ޜvqL7?XggZ]T[^o{A "K9*.gHR$IF%%Fb]܇]<{ؙvjePf`¶P5rk4]lv{6EbMe{V\F- ;M\*(49߆ ҚPDS]1k_h䗹9 ro9ڟ;]o?ޕޕݾ3Nmȏ)Hk 11V!EJ1BE XU6ft lIT9R"kpǙDxh\6Jp9 TN,%C1BTD(%feR7_uIi͜{l/;|mmbI9gtIP9H8<:) <jSg{+BBҨJĆ*b>u5Ȭ1kBG+e~X  "ABSxPVOoGZ7#y&YLt: @RMPHgD:aѴj.l2Oxwlܒ qE/?OF(J r2jYI]Q\-K68]^|5N6tHsg2G~+;<`j`8WOW6r]}r>i֭χyAFkf 6P SYDbHђrOxgBgq =RVշClꦏ780NN'(#J/ LJh! -C,Tp$tJ :Gj-u: )]tƋ }8l9JQ$@h6qV|y>-TS4J!80VW} BўѾ[ؓx6VM~BUuڃΩfA"+<{V6ysnj1ɚsp *5C> M M6;j*E8^oN;ڲSe^N[~K^g.h2# dGIv_ ɽR %@T>"9q>R 9l|E'O`Ql` W %J6M;{~=D/hC ڒPzodqEa@~ڹd35ΤEQ; 4Tg:KăMM]o]͇Mt˵1CG>mr 8%XPieYSn}/JI]?gz_ڗ^y@=aCQr<}V oΧ ^&r8o^&-Y$6/ʻI%kBTsγv'*m7?>اZ2.%Bk c3tq܀Η#>A^  [FmXX:,0],3I[,_`ߙ]OFӹew<鏿[MyFQ$kWg˓bԧO;iI(D-24|aO;ȉ24$4o)Ya Hs4'3J_MQvaښX#B@Bҥ>Y6 $}LL6b4mpA7jבHVƭz x7If>#/Glo8o~k\U68V} {o䙲;z^6';@xT huUhoMK%r弿v?OkL+qU*H ȩ E79CނS) XbcƦl'&};Owפeƿ2w6ȹ;KWzˤJQ6'1)JbBYo]oa2_Q'HFXG- d݁́Ol^:ұ}0lhk_3672Vj6.=:z#t+լϴ C40 Qv&@Q bëjD&%ȿ$ =N,9ب!9HIx>rY+FDՍnsZMHʢ)-Pk5I$􈘊ѩ&:K9=Bvߺ|fʚy5ި 5>{w'=筠UUUŤ?t*=wk_߹va{ylF9cE #&a0  }L.xe)Ux> W}@ 2\UVѐNd E;2tYV!+dY)mtu4&PL*Zb,l.A"*$fa;L*I Ctջ8^c=\QsJ_3FRidЊ5EJK E2ȈBS++ kPt64%b ޒzd ]k+$ D`RtN){j!BEXZ 5 Co0@ʬ34yЂL5AQ]$2 v:9~ { 5SFlT=r:KTg<}=h0վ_D)a޵$B䯻0s;ؽ`n7E<}IJђv$L[&fYUzDe"Pi!:Âg5\,Ѩx(b6cPK<\"J2p{KL-5Nis{u (')y2\wa^Q}8_Xs|Ro=OR*-bPy;$/p t0):+A|G ';tZ:.5rRX~@Xŀ?J (PTڲyAd!TBg/x`xI2<9C7՟7!> wS~ްҥ=334O :41ǒTD}|: ̲/>CndAs41jhw³T87ujKSHZ!0:Pjq aOo/} ݻmpŻkd<3M'Pw6WT͚=cn_|Obsn2ALRMj?qx`:Cw`xM +8QǺ56[v)/<䪍{K' !p@tr*g $,;&#YGIRhąy!Dsb4 )T`_+P<ϟ8MnJiBI0>#ND3v¨\(=W^\0GWDe 
n`ysDW[7`of&G;WmGO8PJ91҈4fϻ_]>&tWb|bOܧ}TYi!R}ěON.;TFB!i&BRB<Ǎ:Ct8aF,pψd(@ 1P A>罥Be=ՉKY+* xs`6݃*|`@_~mI KJHT Gヰ"1 ;?B e2?K?Ep,£\AktnS s@X XX+U:[.n.UIaE\;Y&!n#tb?ߢZyF%Ȩ8E[S/\I ('$0*QQ#E52:Ř&ŷ bׇQȊT4b18V#юqԈrơi+}JYV22%4yD2riI"LIUT#ZC QZm RpjҔG()}0(9ВF"0DaX[WQ/6Su#"#eb?ިG82DS hl2,NEBI!( dԋϡOEc!-qT؞.ѫ]#z?mϹJF5O7F?P%D#,ηO zT% )s#] g B jeђX#(ŵ6=̣TIqIo7C-:=' $skࢳSX£Qd*" :[(8A*$RGp)ƛo:quUCʽ?0%OAhô:iCX!5RY>ܟw=V[BƽtMOȠ9&"1"$AZP-I.qh%Q 1EGmIby]@D0J= ,OȃFǕ1r@J&aE#iKQuYf(ႷUlIK8YaM4rtrȰ Ĩh$\Wg98~]Y ݦq?ǟrμ23V|dGAжnY ʁTHa ޠrm=CftbDI9p)(eԏS[@+`S)ZA'ec2|k@N8/k HwJk,InI=:8OS18&ަٓ*;*}Xwir {\pc ^$@Ec V/C3ªq$^\襭/;uY @&9g*R-։Ƽ3VVFqn;>/KO0YE]zv6XojzT8f^=H*(}@Z)CTQg)po2"hDRâ%SknzX~ z4ZCx[UK./pMȮ 'dSV2QT`)r zًd`yEo~8(;⣚|9SO?t\oO`6lM3/d^km|~wem~*n,BZ_'Xk'vF˩ T3'^@嫨GsI%#33PQdA:,QDhCyPH J\3uQ9kzai` 4tzb,".haY)$ܞh=1' ):f4Jv1AJ`)t 48v|LZ.Wc2JHJL:@ T jO bd9)'ʐ@5J" O.i/a$Q@MEjsIxx0s2.HCB ԶT-WcӔ1F-UJqADO[?%{=aMVv$&)!iˎm %AjD-ʂ% `gN>pD2#w3<34$Rc4f$O̵׵ -l2Ae9 Az_ ygg(a?CC;4X*Ceqڒskv;v\ع&"N)$AR' 01D"& Љ3n*sc%)UBIψ'bYD*Sgz3nf]]ACfL7u/.7uvcۙm֗{WGC1 FC?O#12`- @+"ѴҐU9} $B/D/K~pA/4(B⚑$IdqM>՜)!R sDJ/U6F)D;$1s^%Ax &7`pR9&{qU|= bE_V 35{G[{@_ѴyHWqDhs׹wz}omPQa [[WMpwtz?ng_`YZ,e#Wz^ȅk.oٴ˚2 _]m%sADȭ 9 V+um&Q I̽*L~!eZh'b;wVartߟ]=D&1_[z֨ӽiO)ck_V|e"7-nB}e|mŶV)-ʚEY(ke͢Y5fQ|3-J@s^JE"UWT]+RuEH r(RuEH"UWW+Rue"UT]+Rue"U1P݃H+RuEH"UWT]{S!I+H"UWT]+RuEH"UWT݃RQ84qhes%y,EⲵFz8 ,EN8jVw)s%~bYX<{YkEnA y]%VE&LI 8uQ.ZĂR AJS ĩv! 
ɄE2 ,Z#ՁH+%bz^H͜ 3g}/ҏ!v+15m9tlyanxgC߬Myt?k,/[5-:g4%G1xL<ب%Z&&,HP k"iFK|A B֍ 0`x`6* Aj2j S-$GH 1()D ^*H\ J$Аkfg''gRXQ3Βj})}ӡ +D9Z92yajr;-ʻ+6Ú٠0|/HBQCԖj,)YKSv>Xͼc\ ѨO?3-NC!k ׆<\)sC +Z9CtJ(*Q"2Q2RN <[?~rrqgq< !/-OX)XD#Ob .ZF:kYFҤ'Ujfv}L2^n6pȞR 'pf6M+]ŕ slzNJw]rU6[ᜱ;=B' i2;Q0⬞\db2TKgAf䴰Phҙ$dpja$PC~:?>sh>>9Wmہ3[fYP̎]CA\CnZWWo%u:B跃F-l~ 崓ٯOrou.L7ou]6*[]ׯۚt)m>\]3Zax5l|B˅ 4F:BIz֣ҟbϦo߮'qif\Dl0يmgvto#vqQ0K.fØ1?&g0@l_:E6= ^Ow\7:݀@Ǥ |Lg1ȅrJq T86|c"pZx8Z~|zfMFg/$PǓA;|}Xq 4:tB~ǩ|OW#urx<_?~uo}~2v4=t.,F?ׅ 7F !c@M"*-t)bd$8Vg4}.o&gϫ_gv|2ofOFǿzۛCZtzlN`MC~<\lRg1?ݳ ;^m9=ᇖݾ6ܳII5xay:7nҞ#e~WVq},[emۑ[rixrsyFlC1o(z=7"=S-sE]z vTMy5]; =pR4U Wu5zO;8v@vفڥИ89-2%9wB= PM)w5}@klKa=(2|kƞFdj" AQOE b` 1^Pg,(tT HG4ǯ_h>IVO)v(7?6JR{/_ Tu+zE7*%( uS$@(o)oMEɑǨ^QjBHg!qz^ q,tJ`UD;$G؄ 72fN1޸JoX؛glg, Isxe?b4_?MʎF#7I)"%IxD(8#Td-'Hd_@0`FC`6T^3!sM.~fa˜Ma%<;Em3jڅnxy{4WȜ &S,x%&:._bho8~dlJ$49d"paH"L+"M 0685I*Rq5D%UmJpS\~|G9޳% xiy}BZ̦C2 JP\Xd9#2TF ̊hx yK2dל9"RQ2TKqj=F,m!ndVd,Qg-ךROmZ+  F8Bdi޷Rբ<\F.J?_NRN(Ԁ+xR`k?Ěw 6rs_}l~Ok\K~Ֆxw|nqƬ&4WeHep>] Zd̓ZányքӫPT Qe՛TB;*aoۖ+5 pk5{mۗ\$]w#I4-ڙg(;3ۅBDz䪮^H,Um$#"_#^,#c9g{tt=׮O%jٿSW-@hu ֊Z:kZuf~zA@"$}C"$J9C9:4wp'LY A&AP^30Q!hJVJqB LJ1K,pZ2xo?00fPiM:&HꯍAs((PE>>Z 1~,7t)ǀ$P|:2`RJΚ߇.ʔߺ3݊k`-K6ީtޏĕjIqM )\OF2]T>򀍝5ڊ8 <s֛bw3E4胲6kh: 6;}@?ɋnj=!{^5|}75O0PմߏSۣT?wՔ7]QG9󊴱2_WQ5|ĒjɅ+3؇ё6g֩Zj-`(cHN^%N1$`H^%Bdr(sc.F ˙g#E(\,Bi5 ^DA*˔>X"3$$gE62u.3l9mMg2$ 5-5?9x-<h<=E9RfCjϳۏLoyyRJw*-%,Hǀy{O꜈ [n U; 9׈$ U W' @8qcT‘QnlMg}v< &y+m'k_E;Z1Ŵ|s\%F+Wlb?inqVr5c22PtoT克9Y%'GYoQQ-EY>3:ڐaxU৸V2{!4 @ PX(P G!SR`ht-b,(c%nbX AjLАŶ{-[癁› ^,:j((\S6t՛lͧ(&"Tsˤ e!YxW&*FJŔ4;MNA%uN)3-6GK1|I:G-ڋ1`(ZA )\:-J YDrdI U,H({=6 E{#_c}ScSm䨹aO2XHRN/w}-M3$mdIZr4"+C" 4KVfcgHk,H 2#k6,ss΄0ǫpK:ƹWS?˻yHW/k>ve8L ?g..}~&CrzѿpQj-6g}WXݤ/ͫpF+vOsW_?a݅I~&U.+P5فrMU|{|DiY_4>Tw%Yr/iR >7)Q"]*ϲmޅSq? =͓f,}ᜥJm;{ZJqAu_,5ؘ"W7^"Wkׯ&aXO?4&_J_x^%[w3H9앤θ`p>dF E8wq/ -r穏lV$ysKޥ2hriV(7~y>Tcn(ǁ#SF^}fhh_=ēhf=?.Lṵx%}y|Urj]ISԷO=عuCR=J'Q_ Lҧ~ټRXbgzE[wUU5W?WWqhI,ܱM])lA-'A^6 ^|-8FqRC :AځG֎0\ }́MX/h'=4=ij`+q6)3]h*(bLDs+6ܚNK7S=UD+M7wu[. 
Z f;'&0Ӝle6l|4/RB7",ee`X{Π C*xlWm0mTkN%-˺#2E!Oc1M̕"Zk%"cjFV mvټ&|g7%qGy󼤦6Y+ۏWovۖcwP0POp.cVOO_[@x)+)5ɉ p PuzKIϒ2NA2 'x m]fĸ!Ѥ"+i@H%@+G'gOP+ ;]hD5qg'7wi=H |3u49?tѭmηn/jvIDRԡVJ$2L 㒴 h}6CT"co+;CojcH]3+ ]!\o@ָ??(|xdR ?Tc*R"C~(iRDgGiI".?wUZvwU\ѹ])# ?%tErWE\HZGﮊ.ݕ6 +X*J}*Hkı"%B箾Cwe,S/%ܖ_ ?Tә*׳^RMz!wU6Fp*\*R.EB:uw93d)P)J5TC7jhc#Yzbq^T ?OGlbs,ט׈I#|^G=^s:.4}i2T 0(I>Jߟ)%MD4GQDCȢ;gf1"RԖf,j0P. Ӕ*trI>pH}Ujll0ȍ~ ܣ Q=ȹl\ Y>lϰF/6Я0F\|}-9xyFq`X"q;𫼗I 2) S/}R;[n0\kmJ5RJK- BF::CZ hTBNukC#%hBN0sZlhab,|2f6dVGTy;67l5<5{_q,-+ 2O.]S2ꅭ2/TPvCP+6G 60Y[KTuׅS%#8tX8, C^DYÜvB!2ŐS0ɠ~ERqEA$U!ZKY0 %D!)LzIWQZp@ޡNȹFt%!MeԪu}>7@-U̵l݂oG$o vo:VjY'6Db&|""En=|FQ+ Qő]y|qkI bG@2RS 4ak_0O2B ˭:ZIt7"nncu{nk4?}|#ukA6WYz*\Ќq,T*1FSLn%qYA91aDaCYta.Ըs c8cp.2EZ1 A 2g[Y" MHeb6ܴ2GL@lS*Z#4&$j!mY|afr6HD[ 3tIL!Y! X W6[<&pAyxUFyoŴd):T9lNFH+Լ̌N\y \w_QKq炾r'oEVJv!{iC, $n<9*g ~Ph(:_ob> &ԋ@Qۑݵ. X l /ċvM|nԇDPʂg;'78qIv?PYRS^PtRq#L]P#K3I]moG+{H=c/ w Y, !jIkT8e9(Yee "=53OUW=U8/zG{~@>|,]_SG+8@*7,KorV8,x](y+M=7A͗ *ܞF>/jMŢU?/>mfhq4 N|W[3|O|4>kXB$rLJQd4A7 |`N)D̦Q-eV6E !Hic,ݛ|mGbJ'gkx;,R>[ٴdƧ'N]=z3'oW1&x% {'sG4.I?)=஡ۢЊa @֙,3 :m˿??Z>y{x^Cdvi6Ot|˛%t[Zc;(yf?fr?mNw{$!$}8ϳ%\,yZ_1l>L=Z^:ɺHtJ6NMc>t>H5lҤ7hyn^SIZ.nd'Z9vC| lE1!o gyV+g>;9|xyLaްV||rADfwx;XebQj КhRo{;dܡJhUg=k+S[Kh7Ϋ ^)̨ 6z#M^W40擦 [>mu?7]M7n٣ޛeOT*_t@y BE\{[8JxxP3R5F[u Gը 4"i-}oUnh 6J8^ŏƧrq[ܟ;O\EKS.(ԡ x?p~\BA3P38fZ}PJs&Sy!c3y%וSP]U"zDC ːq 򔓳˜;I&W瑕,~q&@C!2*Y@3Td7s#Bʩ,Qs9 p*f`nhzyAk˫^DQ$p(nNZvGZ잨x2(ɋYMyqݗZ5=7zTg]KMr~=B:9CF~Jl|kdjw*%8RNFr+<ɃQfK("%I8VlN)U")K.`ڦH䬼EHI+Hնmajfk me[[딭eA[vev/4öm\c4~<^tbGY%equA @N ,I٩2dKs-AfE8{> & Q̦46JVTv$W}HLc)PQ-qv[0 /VvkՆj! 
<),BZ(O.LS V3OdL>\LȐ+DK&!Fh87|HYEBXVg=URX1}c_*[D7X"> r5>gIo95: <B:IXfƲQ jpƌUVl, J.v&AAíD QùKȝW ;]uiVdIle"IFr1SltotMzۓ-%P#e{|XtjK:^Ojm HiX (P/LK/TaFd!uȜ5I9^fB{D.KRYqJpQHL&Β .Kڰ8R\[Ӿd$+6R$g%|?m떺x7~No$KrkDIzǸ"MMR1lDl F!9$cRL'uzevg_so=,MdPXeʂ*l+28˹& pwo/)) NpQC^LG`{$$riM,a`x4;=UT̥ "Lw2<^O_jizD2dYQVDGTJ*Vqq69,,4Li"I 1v hDx)2 )y@y 4ҘdP$R Ziv" eΤ]lGA>⟏6E@QJҌ:|jFԵnثәbw2IǸ|QI|rgWK*RzmQp gݘK&.5Rr|y_sZpg`zE#GaXdȏ& R˵nYuJ{1NRps~?N}*ȝ5<,|ễ;{YGG)|M}R݅ƅiw֊f)"{3Ky,泿n0ц颗UTTKiF"}&&L=5xA1zE[67Lka9šuQաh+v[? grdn^rp񥭗u"Ŕ+>rhBqkan_Z=gO-2!Ԑgx+̔u¶ځ %39!\%+j:OVŝM'gU܌W~'rmM*s%a%x\`L\C!NǸb\_$ -5'9d3R WQ&,tIV" JQN\9ś:r:+DSt\Y"hM(`4+{wwd )XتqתH>UѶzC vex 5wwτaA Mm|N3Uz!.Te w+^9@={t/ VyhhFM73a:giIIO:fO8BYޤtF !:L>qRlеR^_*Պz87JV\ ʤL,_ZK,} 2ȸ"O@b r[e.0w /s0cm4}xs n;o0)o8d}aRu0!i}aR,z|3=L$#-ɄVخ11͇D^ ]@f24;8rS0ω^eXWv[z[n5!KKp2y*ad*f0i  nuݶWV鋿dzmm,$.4aF8R[51(b6*yFai0$Iڷ{,!=hkq0hP8АbÁi.]]ūM> 0Muˍ*ʁ Yaع~kkѻkѣkыkSP(8p8$PjM,-\It[&fBqYqU6J(X;-fLڵoٍ,B7,tvsѽ~FpmaIj}||  uZϐ\<+,}Jy,܇!kv&u\.cCdǵ6f)$P,8!i?t}tAxc`b"`P4KU9Gȋ^nI;^ҝS/t'K)!'"(ihRt)V2.FQ`KE*z)C0!S&L m2LwZDhD 0[!-llxpU|!ŒJ!_bMI 5]w]}AJ ŭF&)BN~T@bu3Ժj[s8Yy5vl6B ZVnEwMD\QVЌ`/Cs]z?zk]n_QcЯOs:9;j40PI1WY98{%dMشN9p5I%pvV8nZEz{iQA, ohq·[|piG.-a$EHNάvg#gC5_WPAcW fy[\7CpWdzԶOx&Y sl( HHWb4ƈI\FANւC=׌u=ZeAc,69hG&',%! 
JH˿7yDxOAxr'ןd7!2)[2iHRE䬓.y!%`e(V=uc8 hۍh_#Udzsi e!fRT6BJʘ*C1js_bLPBvЕH1U^I$ ۋ.`O[o|]qigp`bo;1y`zo6-e)0j|Z*ڳw}ºmiI (MQ_jA48!){H!iCRJN"UΔ"c +)8l2I# UVNB3}X\qR' u!!Ӵh9Y,32 AFNKlgd"@A E 8^ B XcE,cKF][KT ?uDft_9`_@f[bWF:ֽ"_GVq_ oN5 PC[Z^u*NɁմ3CbM2n%mJݝG@ SjD0*ҘXWb U9_*O%cJ;9+-kZ30J:/uXvmn>=^ٻ l@ uQe& ^U-hJlJO>~Oycr0~Kc2.F_5t> 'S~FTD~R*6:xl|ȥPrZ16m*[~[P8rN$;@QTu[gUN:ٛ(l&㶷~mQ`[.}<;nڭvk~ !- ZW..ׇؤФ˶,V(6ZKD`%m EaU v\5A_s4ZcRp1"IF.:C^,# I`JG/%/A,Kΰ(_&$y,"Jer1M  Q{%#c_P0Uj$殞;[Nl,!EZÄW_z)zߴ[n91 )*X Bh h_~!*b7p*<0y;MP5r  F(()Ԙm*LC%)Fg%SA8A9h 2b!c`f}^,d"į3hdf\HYqgo<;W;ggz=T7ޜdHk/9HC2/Fb^r"}āNZ{2Zp,% t@-3GtC0:)Pޤ"L$a팍ހv!:;e݃Pr0c]LԄ(n sA N:[B(^Rda C?rg{Ƥ0ye<JH+|H2$}iemlH66T7:}ݶۄǿ?4_&˼p=EluM4S+Be84.;L9=Ho|aK3KkA*B `|##U L΂c>dh,01UӤBdUJ8cP3VF-iu9jQQ'Ɓ_?h8o!<|昽)Кc~% eOLzfL̽PbN*ŒWb.Of:Kjqu BUM4*04]b# 7n)ז9-%rp[J䀶Al)%EgGJsN9䉧u+J`m g8IPXv-Z`6IBQx-+-7po< C7zHKAWŨ;AM{ 矒D~vZl#|!4>m,UVq-UZNR2 iyAZ^-o_sf>|`0E!6eO"ec4hX <VÊ[;U;yTW"[d@G.0VD,RehSuQ d~q,Q BSy'Q s 1z@Ξ%S%P"?@&UyNs__|ӛjl=nnw_&?34;>tF=hS+Yr1?\j֯f!m7<0_mY&TLP8a놜O&'?s@g :,_˒7FbM޵q,׿BSx~T b;p/. rDa[7>gY.ɑŵʆM;;S]}TOn_8uqwdo,R;\^׻xn%q;;ir㹑_q|mh!hߚJq`]<99~N&W qW=rߒ، rqԂ?+Xqay/.Qt4>70̇O/&GxbsiKWlV.^:cmpJ[ὧ/6:WW˫yzqMoپ{rxC\9a孻v[jyd;o\-t{Ѻͻ|A_/0;oˋi- ۛ\7g vk~//Rr=CcRra$oـ-.L`92oK z񨮾c#r/5)PjǷo.+cCo771OūUcD ̯:y, bqR$Aw]jXM"jLޖOjXPPv}tprNSo٣Jg%KD7UzvO @Sv㼡8#sJC?` -콊EvY c_59||aj'{/o{+V 3_G H"NYl]0s;Ǧw$ԳŬs֞Dqˌ>$%UVش+!Kg),HX&.Y{V BBt*7W*KI!n L!^&|*)c!H9bFb[Q`Q n

ۖ4f^>џT%PH/%i $E4pPZil-@@Iv'UP:o3V7_&8M&23u&' º7WjzQBu"wj$%!`B &}ff츫 804mK+ny̻y n|ja&dZ@X8pi# W%ׁіҳwSx`gܴ!YE ҃ZR H$1"ʘeH& .LKc4/G Arѡ^F=حνx3(nCd6 WیEVfeSQ_<{Uǣv0D%pۊlLj&8x*G wAjz1k$!p9X]%_.$0%LC^JSn15Jq]R'm>Y9#ZMV78t-A΃_xH:EȒ,\\5vEQpYAdD"Pl=D=rw7Y rZb,2ga'DHAV ?K$@A269MـJ6?uڳ9MY#9wDfc'޲6R*4[ۤfMw+(a:H0U#?oO^%0_ R FhӴ㠭yn:\|q<ߖBa|M{q87k{fM9qfQ:Z 4Q\ۘ9E{GAŨզ[cV]kA#(ZthI3)G'=9=&4#B wäDzXmڔRA4G>#ҍZIӳNJDt )LHH H*(mz x""0fA=c y^OMGc P|_\oqE!XIBi'W Wd"G Ϻ] F.)BeFjJ 򨞌LYL 3ցi,+~(]:IB 98v*pc37 fj A5k+U|RΤ:IL\haY-9k듵ȟ.):9ݳVKyנ|+M?샩x3P[;i* 2m@ zxXB¬aG: X&]Z8rċ+4SANȒǍ8f=zMX2T*`$In5 "aXg7vV%UѸ\C7 sʈI83.BF!:NCR,=wP@Ǖ1[`4e,?u-+)Xo[&rR^-ֵiȓ"FJE=xP߶C(%`X.9w{z(rPR t9'ՎɫVyݨfTXuuQ,c\mzxvިa;?ywK<<'z*(daW0 V[s\^aYp%  ӰW,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\}R9CO^Gp59 :WXp% FW,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\}RS`"$Aિ\Z{ɂ/Qpr Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,b'\^OG'\\r{# ={U=,hGXRi\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł#A\=HWԛO{Eoobnb)o8݁`gׇ=~ ׃̵3f-ϣ{>#և=v;?p fo "j[I +Gpi5 \uk~pխtK^㻎T'}U;dg%> ~~qK.g_=ZM>М_ݦ6]O{[B͵#C|m>9YJ, CI9[dN$KfTI9ل )9f9sZntRjb>.*;Uc7lx_/#+kWe@ibAAo5_n&|ꍄag}JՔ {QFlq1Ǎ\ܒg.^l#aZmYקyOW'A{G2}lU@>?GFZcLQ9T]/ Ob]cUSٻ6#Wq .Q-1H,艹&e[2S3]}Nuu3:IV 9e;fi,NQ2./;6gzs " ]OY6NWo>=:j+W-ֶ>j/IJm$T$DK ) Ȏe5{-ʆCCaX6z,wh<&(" A4ɇ`{H9׎&s-0[P^ +ɲDʕS )tVS=>Q}]RӚNlz4Y4Z? 
a$G/u [͵e>.j)jz)Gr-:}zm|d^{,ouSv(b; T>@jĽ\k =->-{TjC8L$@LαdEZB V-Q$1 KZoL"ԑ#8e!HG0HL$M0 q͈_ɰ#RcEii0=5m׽u1' :R—;G_ϣ/<a4UΘ\kvm28t}^txQQ[!&S7:7")9$K0xLS%BlR'@ ̂D)0;mxG]DFگ79@!ɠ%>tlu vLF0yi/~(I1g>)] 'pfO2Jb$ /;M8 ¤p&USUoDX^%Kkl)4шJ ST`R1U*f(S$]o@\א NI_>Wvh`J>QR3U8@igTUsVL !Y l.QƼξjϺVP 5ӇC^j 5MRxK]].nr@qCr@]|"Z.ϓZHYi)Kɂ$Ո=X[&CD.Ap eJ9NXӧ=ǒnRo'rSy{9_ɜDK4- \vsu9\[hC!=3xGKR^+Im #"j/ڭrivKF#5I{׌ ұ ZXZg-9;'A|'/N 2%%>y5:y)*e mXF=MhɈh@,YL&KqR\B+e9A4'(D6[ʰ (@%N" 5,SUڀ DHчx)9ȉީMm;[Ȝ8{7c`P8F_ e2k VDmpݱ\wE]yso-ה> Fq|c2Z ..:]NҏI:8UPGpV.>>N|_=h/%21G8YyZWґ}F 9$hfx 7k]1_,t`.]%C5ؽAJR} w~VgN="C UdcQ3UTXQ=A=!{mvVza[9Zմe7NHqTk &A{#ƾړ|xO.s<] g6Ro$MꭧT/vP!kfm)ֆMyyi st%R(l0-\2 I{ %X)3Jh- ^u#3sK0J6T-ss5^b٭:tk7{!gNi.[G'yY>ҼNq( k:b2u``=$re6ܳD)'u% S{!4`8J,$p"jZ3adES >U2|PL"IvHw|49 kF}s%ǧ_ٿt&8>/dh\iš.-k׻'uACNOrere?t*igD#$̂$1 -ƨa9lB3BHUK2j T%>X2\Й Hhe62,bfXM h2K2gfsO':!nRf1W#</ڬ>}26 "N$U!Z!9gdDR, | d*-zt:9AIT-E}a ;1B%RvZܯY_!T~WP>"|_1x/Y:aMCFhy?Ԉ0PQV#,\&x줃>~GϏ|pHR.Iv6*|YA Xh2OBkҠ&'W mI--h\o~ ݔdtvFR?NO{ʲI/cAGom+\M)=OJq128DLYB` ]€Oe-wx^`X182 <j[Y2ALH!1LLWn Kb(ȣ`pEcPRhs3)2P;8ĥG@?O3#A-'qY/QpǠ>٧Q)r2=OI,e!Yy!(`!gW6["Z`ddFt~c- !TLH8Ac R!B;fs2ʈ|0"gft Syᴬk FzeVJA#=@L9fɾ+dQ%< VF=_>)}WM5:Qp(}B=(ٝ\Jn.d%{ntd_9g(yHH$d^ C-I#cH嬑)A`0 o{o;;nWV䔆_k~i]]_~ZntW]|KjOi}\|r* y_ՔQYU `\/7\#~׼CD~ԔmLM׾\iiڏgc؛.>_'V?L 5iZF3o>7oDZQem^]]x2ix\( {t:*xkz0]-%WDnJKyM$. \^nJz^Kn%݃[oVmLXm{~V 6^ju ][f8#8oFÇ8g¯RFЅ.ÛB7#߯|I 뵱NS0O~O"H'G$_otꋒ yWY8:?;ϞX,w.j=LW[W.#].;Ӌ@]'+Wf6*AΨMLy0[,FqOˆE(],DuAXg%`Ƒ=פYe+ߊń"Bg-pUƐ\ L~EMGӖQ ^^ju =R/mAĵ诿gnzг^sz8WE?TP0Ȭ@\&N1Z]\$-Ɏ1UQ[c̞J!m6fAO 6Xebv^DL hɡ%3Tmd&XVVƶXh*c#i.b 2^s̹~M^]ve:'`ϧ'We obeEc`XqmDȐ+@P=IO ! 
lJm!b .u'9,3ckW%gq jW[ۢv@kp+c2Ly1&12wŐd̢6UC&ːyA&CX Z$a&Ñ0.Y k Xe<&x:(}Ajc[Dt 8 +ުBeErMiY%1*PQ*2&d3"1m|@r2 胅F&ydH4*#b5qGlj^'\-tYmlŮ\\#K3!&ϒBT|CĹpļq{ms6*np)p/xXmul2 잤EWG{wpLFsBя@C6я]2P%x-.Xw9 Chr#]3~:4'txQӞ8&}DOĕ_8iK,8)嚝y|i)%$Saug9pEx_ȾE|w] ]hKЅb9-$ҽ4kz5p}Lc1ݒ]Xņ%Ÿ>lfAwm ,_ns>=-|_E[ LPۋj]Ŷ|_p4LuW_ kCpEkwF \FWEJʀaR\l\;WV]"}"%ER2nŃ]͇kItKyԞ4߯8!9&}m:PKxhi6On$ o(iwۏ[ 9 ɂV E^rwX*ޡͅwfs)*+K֪o.$ab\^?X*ݱfgNHJ+[+Woƾ lK[WW0i\IbJ pʁ+X*j+pUpER9\= \ -d;WE`+wH\UVؾUW"\Q[r NۚOtx^Oq-F6#^ɾxC{:7JJRyN1!^+HqsK=e@`oary@DN5ӝ?!yϙs>,,5%DḤ hڨZ/bf@\I{״?ۨal@m1 n:::_s}Dzb}PΏ l=ZDϋ,Y13R=,qF2ȭdYO`l=_Vx"ZT&M.)R[[#ATUˮԟgiaHfC,ǔeNl!VaW΋&Pʖ$|a!\+_$H#&3?DĬJuw]oCDQؠV] (%t&9 z 'Nbv҅$ >%1SB([ IR%PDhѱX;8R4AM鏦l)D/sk1q2!FɤS!s'ON1"ܣ)dlWV(]-!tJ9O+ߐdmM䅷”6%SdVX)bV$AU~I^ cT"K7c;W:C>\:uidpFcA8CRRjqU>H#~j%E q>/UrI=/I$h[J,[Ԛ H,kl-rIXp?62߳ Mm(V(2Sn|ݲLՐ6.gmlz6UBiK9=*ksv}u1L?A26krf Hs2AKU9%$~$2&~ B842C*~XC?=Ox/p E7 $m7VgoX(;,6v_(CP9{r#fCDŽ6"4r{_e7t*rUztux&#U婔{zQ-QqJ^m;!6m2+(jK-hlM nuuh{lGmFð}4eC-uλ{r|N&[[~F~y`};:^lt"ɑb>x-Ss˦4ۼMa7UJvJ6 -mˏBbLC<-Zt.KZeRQ5(WydDCJ6jD* i3nu{UZfrp=nUn:Υ4ƖovYv aySfBaJ<H8Lt8,CJO1x9UΠׁ[L 37Itk|5Xh]:#l=hR7ije!Ժv 6 rsM~m60܌7EujyF5{:Ċe˖OVQ:ʷtvBŇ@l}ҞRZ+4J?{x.X|?"ͅy .%-1&rF)4(NVrsp@΃)yгf rbIYHS㓊ZGck~;yeqv=r&3crĖ#ak`KL)ц6ٗ6Mcƥviʐ6n=;z%vdt e'o&~}HOXIy_,}_ZV 1M^n7k`D0e isQBC$Y4,,(kpKip58b\)dcq4A!ZҦȼPIyf LX8GTyB4ś̫=M3+(m9Ҷ~5WԜk@]skc@wOrF낒Mtk\Nq>D)!49pyB14tP"dA̳J~麙A1JU:%wI,J̅7Hdi[ 3_!VgL\!-ϧ.7}v5~x/FU>bEߗzGfVxR%5)u"1\y%$v(d =)ɹU+U@:97$u$(9:. BG#U jh%g}xMn;os*_S,7f:b>)l= zdGd՝ɏNVD4 .$e Ql Ic2#-֡pʐ|)%{3]SJz1vS:%ҭ#1 !]yʲv ðȗkʎnk#ogWrzx< yI'wJD0gРVǒbGA2iC I9RA8xNL* mLh]*1-&ƫ*KrT{V MS Aк X 8{ ~lsI?͈=ҏL.Jw{kZNY#{>OM)r2˧xiFR*K9pȈ E, RhPk]N XxШ:3/'G/ yf &Zu@Eg.'H,D9Xa!rf$.YzKofkqaնBZɒ0US :=|j2?앰a+_3m%#tz#֕M`kNkxd Zh {~l1ME}= <,xܩkXQ5uPH7r˭4/ζo-qAE^xX(J%,qtjJB)6QB/RΚX!,9p*\]AM@4O%/~w,ĩ%Jo%N#pdY}h> &hFBT^e!R`.jnRy[?P2!$y cR-iSZvZ^bfm]i^?-<9TKU'me~jг`}|WEJ @HN|!yPeJ\&N0W4#x4\$#تh͈9Ъ@, IE-fBBH#I$ HFjFz\Vb£{@̯ 3`\lO~}._4h`4܌gǬ28:ZN  `U eɈ*̊({@ ! 
lJcQ(c949.jWݨ%n1fǡ*Qgf<w&,.s̴%Fj$ M Y 18i "-:d k0hD."Qtf]+9*a5qva/, 0 "V"GxKРjJ;}Β.y@F0)s VQj26C LgXEqJr3Ct*dH41 (BT82ѧ):͒qQbflNmp=IsJZ[b,Rr$_ ,}rB"֠ZW̄(Z)q!Jei G#1u2/q#Ib>mv-wm.Rmk-K$Ǚ,߯dI#ٲDɒıEl)eђ6m9[f)Σd$f5oeY,-|ΆM[qzӚ0>HMe%2tQ)tt>`@jS$x@".%1x[Fuje՛3U.!i-y BEI*H єjpq'ѸcePhW0-~*'[i:HqL$B`phPhZE bLxȄNu*E:k?д#) ')AY>Ƞ9'"1"$Adc-DY$ڠ9R"GmIby]@D0J= ,OȃFǕ1r LэZ,dwp+Q9Gp+>??T“$ObTNW<8m2Yț_V[x騒#c?͠C @ᦄag&IBijlu>aˊ]= $%;\wX`RBF6ȟFլW'+A vSӪ+c(=b#+l[1c]!5mޅ|pU_t38:ڞ_G]B/ߚ\nP7Y3E4A}7(ףrtK#_䄬{&&${yҍ0J('\\i9M&p.&Sn/``Fp#G S oA7Z;#E]e /=߹jΨE2`-\k jX 晲EdW?u ^TҴE-(u`~GQ^l$A\Wfтkejr~%}e~[h8_JǙZvt_鶣{YFUw[67z]~q*0Yo\F[X<GR1\Jjmp;6N.[v[ )˰׀&q;k HWJk,IlM42|RO.Y+f{gU<^oN7vR*ƤkCK:rWRrcB#;/W+hcZN(]2Щ@E4&d-wgZy^?`׶|&0yõ@0NI` GABĜpTp%[E| R 3$ !\*'8ÁHT ƹDlˈldVb)df̦*˾0f 'K܃=91!`EK#24ȕ"oMsVv۲\/w3fY#Z;JKq&']NlnVGM~i"*ިL PR T Ė^z[&&!5Nv 6řlFijR|xuq?ƴzŏ ?䷣mvyAHwnm5JVZ)CTQg)po>P%5^S^<\!uHcNWk(o *{(0ިxa1cB^E휅c~wuu0PMn܍o1ÍPKmc}K96g4ם \6IK9\!EJK8s^lx+{;ZPZ+P!?S=Ld@F㪆o] vug»w|y>ûo?.7?`M7o#"kd]I%sV!ow5OS͇lPP_z c׭'2MɩefzpHb'gbm4?نGkB9a"Z2dxVw^]AEo3 ;^IܜO܂O!5j&Oڎ¼0rwWC|s>Se]CUZ#+|DJι/.2OnBwlMmv[6~[u(wCi|%u4q୍c!TG`m{kg{{څp`|r{%`!WBZ_#zPJ2&2:6fWYRfqR&JOf17PU%3QuBI3cRg#$[*uAdJ8]I8ĦP~`5`UqQ8!(',՜-|BT0/\`yV?? Mձz6*? 
`jb:JxUsvفtI\trk$V*$|: J++l&lUn.=YZN=\V.Nv WH%{DW pqJKX&3)M^d c "{/~_GԈo3<~D{<r1 ~+s/&n9Qx]~/@jh]7ɤTB !(AzsY!\;`r&y'X@erK ?}0[SG3lP˷%\=:jxbrӿÿ%YHkfh)7U^)$s|iѬgIګg']g7NQK/GcR$X&ܫq+F@ b[WVm@ 6$5Z=E uƔBYz0X(F߭w,yϣw^&N9aX!`lH= "H [C*V~SǼts{Y^- qR32ȌLGLJt#GUI$% 5Ƅļ)oW661 R IKc}^-`Ё6ߘOnc?i4#,DFg1 DEM$2%߸vc;Mgq?vϡf)ֽYKw?JR0iW)FT"c@L{6$`IbsH$6!c6y]<-b}bCk;8 *Ah;z`J)>cJ\qCBv@*CjP:%-v3myO3藕ve.q)y RB L,9bTb %F Q4Gr.""BU1.0tx9 ̛04rb Ӗ8 HEs Dm &veZ)7ѿ᷃DΆy9?w/DyW_xӗimh-[c#㨱IdHHqAQ&&3,xF*by$NױCOyZ(S,%4uKDP[bhtN3Mc,%PNV_4A],p\r8{')1(Z8: O ov,E d:.qbs κ -6 sz(j A#[KmI_>}a\;pZ R:b!Pd UbLg񛓆 'p&Jx_7Uz2GG9Lދ"HMi؜=}v޵6r2ؿ;]9'F|M GYIʲ|Ov3Cz;Z@4{z~U,t1>`:\d|}tU^Ow4_ 俣'R7ɪ &@ a2,#2TSӵ(m76 ~z!lL\de6ό<~7b:]Us~'?c$ IEsMI͢0w'JR4_5@[Lj͏#9GuCPkWmt0/J-k1R1߹Q^ 8'ޢ&'kUf ./nwģzE!aA3FV>#11+أxp qvUf7qV4bm{G(;vpt Б|ӃKSn4LfCф,]ͅ]X!J}痛A`P#ԦiSϗ"Q_=~w;'ɛŶM'46eE`[Ml,+U{ '0,g3rTԫ;{^[5 [/cwKmM}=wr=3̽zޱh@ms7u'Okx)ѠQ{k0y/0N 1ocӖL̴"Cw{@ssɴS֕dLIޑ]#ͷ48yL[D+2B D'grB2΢}r(N+# \e$.({98G?BGS?s|h0zǬUbNlJјu"/Fe_D\ysL^)0z Ɗ;<ڝPAھ^x'J9->K\g6- @W@Z@S.Yj>*K謴q{x4-cKKfGoŎH($̈́BCJq8q'NXQDuψd«bP} {KPV'S.yg9g[l:LʻWdc'+嵽m>IKSHT Gヰ"1 ;?BH2%͟"8hIt3FPeZJך)5.& ZmJS$è]ZNvRőXj<&Jw +96v[ScAO礽>\T*?|P!q 2*NQ_q DTd"rÓeƞpBeɕJx%f6SpN8MBҦtՍRyXb(."QǮRڦڽ{AP#hThx.)hI6IsU:I]FR2jT8h5HP GQxd2&cxŦS"\y. 8w-6B-ʹ"z66jIubrDd3`@aNES<ۡcV:ZZQl.9K{T) .3R$bz1"|4X =6*b"AdTS9oV]EXB\oRoZ~F{,c騚a-ޠ]'Um\Fw׷m}7oF[w*+Mg2%FWGobmlyi&x۶ذ{7/j~~QW?K1K~~9[zMW?|fTa _!we\vpYP6|l+Yد1K^&۫: /4~`7? ExovMWm ]Ų|5w;w'l.|uyWmgݝ5[Oz2 ۪(3LwBWa5xBjn /('Uǫ9!;vwRdԵ6 P]E|. 
xdj""[{mPp nƹVz'pE%9PBKk J"B8K{E锨ha U\#,nϳ mͽ}OE~?±~md`4zv˶|L2HꭼӍW_& hx@}8^{\(a?;eN~Xj2xR'zb,z*5Bt!0sI1x0I<"ĴwH)>~]J_~){x5AT!H!8PzH>؀h<("(m}PL`H'jdi:fAk%c9(#T&%9$踲Q|.@p$Nq>f;/`RڋGX.2B\{prF'$8S$QGE}_к2u},ʺ>b&wtۮdr:ToxTTP`R)g=4G'Yz(KU*|.SUJ՚qHC/}]/`uQ;-rl[hirlN_kaf :xυrT% hR<&< yMwC tYpr>r :1F=>7aɄZQ܄}@F-/$n)4E:h|RE^6sD4^DR%)usEY1H@]3MT,1.bJYԭN\&2F"NpolgpT&FQ)ZK#m4=&`SX4d8ʷd[̉zjA~ǏA/Nըkt]>(X̣Q$*"1A˧ ޫKGb<[1Dxaq^m P݉V< A'mp"+dFS#!#2sჭS!($ A/<{f]aǐu4r=H)%+VQoUJ &<`dB&zގBOkkz#) #')AY>Ƞ湨EĈEZeuh\fbHei$)&) i:"'QyHRQ1&mˤ|4O:,dv'pQUٝyiv F邯) S=\mIѷtvr{5'Cb#]XC-n' ͨ_Unp@{.Z_Nf ,.ݜ'6=xo ոzGҞWw ?`B2oQzѣ caWףL̾ U_LewEmޅQ?+3\YSptav3)!)r/tV|*ܰgg,ݏ 20 N?'o^^A/'|D_o{'ԠnQB9a'WB+)'n:K?=p<{Fp%G SX}@ k[\Axۑ,%`4knP\CMJ+lM=ob#p)Fyw?Yz0iO1Zu@ dL2Ez.;wCS-CtWHa?_Q$l#749]7=_]djYޑ2y߽ޡ}=TYmǭIH}+Uu߮&n\qs )a4sA_یCfb>rå֖Q+n+6u=t0.9|m0,÷4Č3b΀tJ)Q$#pR`tˇ1uzj8KqP  MhY>@"5YU[Se\BBGFRH֋,\}}򱘔[Ȯ/~'/\fkZϤia,i5R/. in?E]"LpE%uvB0Q{jӑxz0gT76 =:l($U61&@(-ep;bI.wm$IUzw?6Kb&$ewЏ/O!ET@TddVFqeNVgd\*LDO1%Dny9@F"+_[OM*QǸ `] dpĠxjX5v,r(PyY*ǭ8z.&L>? K373(lD8`ux𪵏PZ-Z֒w92ZKZ_ay-O9 Y݋#Bh\jpRB"JI(|ᑼC(W}xd|*0R&9e"hO !r3Y)& V \k>; HM:SASV*nPj ǖKg (x`+=BcHA 5!J9(M\ c:dT;>`~$`?1pD%H$ZhbP92C{S}cz@h,}QpjʙE4sSQX!'uLҜe3 =ke_M"\`7?sk k__f(S/9ڼGP )^xtv({vuH#h^8kS!TY IEjzcιN @h$Iur1PK!Yu)IqкH4.sJg<IR:4AUI !=#:j7mG OmWiAj⚆ ;u^2/_?UWaa'.덓uC}SbfďUcoãe -& @HP=<[Kt^Mzh 8""oI0QRSb0DQ$b &lg:FQ 1m ]BD Dy JcmN א. o`)D!c^* 5:zY| jO#5Gw]^>oyEp'o.KtPZp{Ldt9/].糳̯6j5#{7\nX|!N.8S@mDo$p X]5:яO|Q"qy|7$Ŭ(pMB>J2ncޖO1;-gp[ 7o!2($%^MG*qFEw:(}0ʑ/4rk.=V6UUw+bLUsG6~o,_;Jԭ (erUa풬)VY Io V3yL)/2)df?LAq?V{DP`[ϭeq:;ܑ|fI[زVksM㻉Wn (oܿfz7cnQ!7zߨA*ͳZ{:]7F`4W_5mvPS^wmMĮӶ6NxRT0I :kofWxw-V܍`M(CĿH<230ٜ6WPH[y^dmKJw$%$Q Dp,,DR!0 [.Re:%@73I6SpS:: ӹ4IaF'dY dҮ?&UMcReTLrUD1#Z;O\(EJb.XED=hk4~aPDǁJLyb묢ͷtu4ד۪_բ:E_)ֹw1Ii H#W'+o-gBM'%?×5?̰y>?ksȭ?seBω sVw!R*^tZAK+rBpVKQxchG>5:k)C5E9[H_yg/y#fU^vf+o涚3K !B'|βq-/R\#b-d0ESB #X!XjdAa6tȕT)c98|°| .Fc}qF%Q!Jۍ8ybv:Ztz >r@MSByX3 n< WSkXN^vvs+E^>F8/qVX^ U3+paI$ AKAN9)p {bprG!" :$A!x1Ǖ9m(gƚ|_<2jIPM>O}/p22mЍU&ymF~a!5/YIaR䅱k'0/ܑC+x7 dTV3!17`! 
ARMRIs9̝&L0ǃR/285. [<L%ʬb X#5e-#i<;sl]*^?3Ýl:~=rwJvc. \\,le'|LqdmA5sETQ"Q2q k' Y.x`49kih7͑29h)UhGUb2gj͘ UNs;wKNqNJb|0mS,deD_I˃= xvdoGvώy>98 *)qQh;ȔR)85(.cJmNW7(hJv1rS|F`yj߂~[$srz> gGOSr4Jgچ@hQIJh- Q4k Gy-W˸W bxI: C*xS#GǠM"u+2Cy 6h͙DLI`(wjG@}YA; R]p]C笾MEILn7Rw`OsTSxƟt6 _I> PcpH`pLx*M20;FJ@P3-.B>jUOy4%Q*P^z.WYbRRH׉fh`,%U͓:Q^7B{tKq=xRoOR*-Wk''7) '1GKAAxzўw/~ТFf6CCw1u/PԶe7tw3*ۜɖRJ< p6 |QiCSjj.A^ǿ&\r= ή8;97^Z_x8zǷ_`vu?0U[%ooJKC> =q6n=ugU𧿔 VOsɣ5UV%x>ų0&7\u,+JLoQ>l|7bU,_ &iOIܬsN:̯|ld."Otb ^*9%RpM|Cu7TI&`25ip=Ȼc_f.*7|UrVp<TK;YӧRjC <{keƪՀ]rXPL4Sqڤwy7X +13;)ڜhM1_L_WeaykWX{"k(eB֒BK 9E)8,X U`$6n _=.;VF]lSmy"^X}5xy^uǍZ\ζ Jɜf+ֹڔIk2 "$9hߤ{3=EOX﹇S(Û[QD>xPAV]}4Vvr7]NmYUv7N=Nl6IU^ٲP~D~X^'DJ^7 /870 ˼emF+BJ5mF5ԃ5KkDw[`6rݷaPldK%թS;2ɶpq||I3ZdCs7KUr<\̏SPT{8NSf49HW `ohfjxdxXxULfVw/u.;aLx+YΗKtt::,I{{uhк??Pg7&ֽgjx*0'X](]px.u wF]moIn+za X\ ;Xp@ƁUŒ#ftZmVKX-m{HCL6N&{%/QFDo쿌d~qb9mQt!5*e1{_OYXQ"&6q2zM^T-=XvVq/+y}ω/!.zôʆ1)D:$:/|;pt y:Zy2ȩ7y$kٳ0g}ox?by!7 %$YjELb2X&oGqBRe#JaS{2)@J: z(A)5%z-HsK=\d%bljuQy<&{D!˕bY/v@Z 2:"XKE -ROFZ*I6JP5ld)$AE{Fs<-Z iE>y/^Vዛ՚ɮA5IPm{>nJVw !,9X?z'+Zl||4O()P2jxEFeL0.$II`OfZF]ɹD*\Tlz(%D/j,lZ[fy2*la3ƶLpUE[ve}:Mle7tNd-v *:a$ћ51(19'VBZWX|P4j@mu~ ìl$T5ںd!ZN\v20mIֽZ i9L:vھ՞$^fJ u$8YXsʓCIU R+ȧmǥ0Eo$P{Ic[I$Hd80bj΢鶱=l6aԯªaEl&Z""NM%ӏ#җ9;k @ME(&c&'lS:`ߘ huRg(ڙc@t|JI3B$$4͆E~O<Ցb]2Xg3).*.7.n,d%eUVS K;T xх$%%ILvqvq,tjec{(w{0aOpud4Mlօ}Щ$)Y5>q_!I`%NH },6yI{K ܈Sw~RΙMd5LjEk+!EJ1~EͭXU6f4ƤW9DR)58~q "z];}>ł{Lsj\F? e}VGYN4)RdJ*LdxVDDMa%&˓yd ͚dAc2ɂ5'10 h%ātFr0y2'EjHy"#XIVՍw0> PRFbX^(V)$ܒHy^h|qoBUp Sl9KrOhܻ_{zقJ89 mdqUrsYe/o.C|7բ˗#ӎߜY Wl]ijŒtձ B`QRj l^z=P7"O?V7> im&OXI*U/_(3ya]=:/Uҟ>i%3{zhg¾'8S̞x|_w9kUU`e`Q!Ff#v%^BkȌ` ZBQs1ferԵ*Z@z_ui32HUw#&TOd`Yzy~仾j.|UUR7/:YO:.e ߟ/{uFioE`AK֋yd.ZhκT_ g3~o%b?[H[G>N+n?w;9k~nsxAm>?/G#?'j}t6dKwW`|㊞j [* qvs~ujc|9[O^z=&w`ddc5YnXS[gRHP:YK4XW`+st|Ț}y.U=7t_Q^^/voaOdeօx%Kwcn7t 8c`QBHBAZKPQ$2ct4HـN6ǐr%y[@$#^$  XCVsTD2)>Neom޶WP¥_.GWvg>^l'JH#F)'a?7 ?k%m . 
Ye}RNj?Qbiǝ~)G~)Q_s :c"H l7<du=p YzH~YlQi1 &$E@.,3c]Q4ejm$<t>O4,#/-ǗlJj$fwĀW_ :OOV~-7"&"6!vE'0Zv)k RȩGi,q#h, 'J]* F ZwFduv9N EN;m -]s!B)KQ2į3 ̸֨qg<;וg'f=T7ޜT=.JFLQ6)GAISOQa bNkF-: A&M+HTf,&i"IMmglL FWOё<)e7&hI‹tU0_#) +%(XLy y}QWSqꘒA ͼOAVM8eR$jNcm#GEȧ]ŗ⛁`paf%F,)bŒ-ɲҊ ĖdUb!OY1cxߏmko7!d+9=dPw*hDk%Ƶc'y2E#-rXI D`5s\VRD&[[i(*S=|7뼧Y3\`p`xMNmt{?uǣ8t;ž29ȚF𮞊˧ˑͣSIm*@K8̺žKr~clyO+ xљv˔1QNDŽ=Sj1Nͭz; pg,'\I0{k~# p!h&LZO;cٓ3pJvF [L%s%t$yػ 5d_vz?J3[=No`JcVFUz^a8^lH}]B1|؜۪Gփ9)k-VreZ\C >wY[6`ktBnm\xj꺋jKfsӥKֿb. ͊Sw/4:fsw6f ]YѕT[ovS>khy0 67vsI;o䐷zt[_hxʮP?y Ks՜mwk,M,ȭ0aB ,WM EZpǾ0PԮ]x F)P*cW`U/;,5Oސd9SJhO3M.Mԛ4UqQ1\s)6wD=%7]3'I\ͩ"-cwEJ[7ݴZZ}BO]qɜ]5Uҵ&?2[=!~UwWӇ׸úWI˙=z`]W+Ӻ]֝ ؓqWE\mO]i:vwERJ[wݕ͕>!wE w:=sHCuWo]9RevQ7v~4.jI_+~ֵ .xv#Y K/wT6 Rd+LΡFܝӕgos¾q$rnj+wܟ6)b)z:D,c6M* LjQg,j0P.'bNa+#E\׳M-.|x_F~|`0o's:Я RZ+c]9_5f*+bew,7}?oUOr0y:[USsBT#ׇ* qBa leʖ8iw=k_쯶]F)F+ 倖>@3yB`v" 0VtMlHTArȤ5HAP0KnCBe"Bg^ hmb:2s iL-Y 9p>vV@Pi R ATHAJ4F}H~x,Dı}=OepMx Ȋ-Tm^Y %A 8 xoG$TY2ٿ.Ӎl~`r  B@8- uswt|zO,zNyZźXgMβc]:谯FDV6,F WFkt ;2 ɦs (bתp\4Ҭ4>U=͝}/^>`7(@|~̘_hNHX!gɵ{~26o/5QM}"|rPr}έy_ٕ^9imӳN_o_#fZ툏!\Ɖ ng6;P[uٝ{Rʲ{V}QcrR]v;R9aotiVi /8Apc}2 *W T\vyabqCz.q ulGc{:l'Y@zeY̭ʉ#)2"0TBF K }PQ$)!\wi2j\'gUV)DPڂ &,U`,k3OA'9YqF23&d!08zciT"f̾4dS@B@f1)3b#8%V4 d+CNr&SߨdC`"'@/RNGrdz,o^DqR& fY@+P9FEE0)-*bdNFL(,I0q uޒ uB!(99zclEl""5AGq\ G^nzmfXZiҙ7RuebTLI>EAH5F!茘9$f_:bz'`.K01J&CTtpn"Г!XQZ{8DȒ+JI #,aɠyԤ@\ D2 &ѪAԿK Id,IY#C`Qb!f;2(n$de6Ԭ!$EH q;YsY,L@pLy *I کj&)\Rʭ_m,P͝q&GG4 6?jd{__K1mRTC5k&$fGûd',tҥ˿ @~OO5;7ځ6ܺ?u|?unyw;NE4klg񤙳Xn}U/ D!?u֫fUۜ=lM]Vϧ-fbIRGqѹdbɽ\]gtGH<ͶrQq?㡏s·?դSDwҵu_ퟗ}}w_n]p^%[S)yqiθ%WnLp%gCvvi\3?m=h=Þ9}d G4E+If::~%7u)Z*E(+qyעƞoR޸hM{~ /ܙ-pK 9I^LE 3ĞUI:>_Ptژ~УSPTWlvZG4mvw+Eі]Qiw~ygn32=nf~S[ z6mwせX+\s<#o\!{H1%Jt>L#8C}o4 lz{ȄNШ!& - 4[T^C3,rɄBhn$|T4n>yN'/=]m-aj (*"x٧.ѓ l59Da ^ JJ*NWΧ\2>SL`8wYw<{nu ȃZK-B9d,C٤K\&"lur*pC6PQGrDT|m#EZtVJnЭ&**&v\8VtK ,a 6?{W6=F/vfm̌~ۆшȣJrI-HR]JVQU͇d*LfD~q3.8g 6 \mnjQEIu<" xc7N^ȕK)xQm/3I N)_:]&sRG(10g@Z=z#g\F.ׇٲ6Đnۻ}1q>e>]e?uX>]XAd#y6iM3g«ڳ2ZHjqu|#kBX)LUzU-4{<+4cבo$:\~)LLK%S 
=$+9-%ntocPpbӖ3#9J-۫CT/哈I73 R5:1z?2c<@ft,IIEҗt3=3Ho|) cE>FE]2WqstY"Tb:Dg#rdc0 )՘$ L3(@rJVF0Lr5p)<:Iy#Qh^Ec|\H v3A] u-0L9+}X6r̿Zj2ͬ\T:`bn!ˇ,C! |L(y.XVxsN:)&H>ѱvY !uiGLRdYkْhQdR:8rO,` e2]7ru$[zHdqU0?Fp`vҶ#|6i}~KgGl9?:og&]^.ȕ ݛ??ZKt7Ԣ"X6UF$QV2))إ?/-r68?kDՋu5BBAiS"ه jGqh;6X$_E,w)@G^-M7jTum4j;<^P뻓y}~\\,07m˦TZSԤ_c Ӓ2n\.RKhˋI .~|szz4UN&XYxLVhˣe,X[$h\9l Gh.%m`Yi(Ut|bl 8ϗ2[HͧtvYrT9ngܟOTkWQ\^O@ Q0\"L6k0m]B?>ݢQTl.o ^8[vY8׆XtXT55ƩP/5HzOzݧTN33X5' DVi@,dLTDG"kٝt/ T{9 }HcWM.jsXxmJ!+@jqUp_L+PP̯v۶]l}+oc;H㨫pdC)Y{N};7;I[Ĕ:C잺]L鿐ӢYF_FWr^,t3Sܽ4U[vޛzÓ=f1r~Xf;[syE ]LܶZ{>]wzGfk_=t(P<:pȽ[QݴÌq hI &]7?SrX"eUiUUJRgA29gJsO\?É8"94au`%P&ڐ}ҀE)J n dͨ>,0+@—ɴb҇$H)CS)1@ "|F8b@ :d1q+u:5zַ͔ Ի͝f&$: e:ԸM( [ZøLȜp6d/R#@Pv)@X;6_ J?39{#nG§ܵ.^ۛ ;ieES;ܛCjmw{8gBGR U> UT_ QTJÊNQL!-U=D!2HrlB(10 m>*) ]+  Dm>f~W  Q+|jI;sG>{b L(h!8Bg>J#N6ݹF` R*8ЕrU.'Y9R^z> 1cs adlF:oEJ Ĩe9l834~Ci(1JU:%w,I\*a2P"Hrbo\ j~5Ep'SK׮_amnBru֌P Q4]%5)u"1 ɹ.K&$QȄ1xRsH{@F7ps!x<{dK-K ppkH ʁA߁}0oY4טa+hNѤufLQBt;k?CDfRg(-N&Xy#G=rxz_uq?:58gEVaRĎ1&G4҂!lSb%X3]GHc4$Эk8bG=v@OYۭA?Lk| -RNx:kqxǧS9#OZ129N YLQ*!2H`aGe/x `xXT6)ǣ&Oz(ӆ%)ByR.q 6&{ ܪ,M"ṏJ90r\+gRdPBAG\*FzEoN OiIH>QjvZ>[-;h)*xOs* ^i|HOHJWT< E 9T).Te!/##i4 ~l:#=$̘Rsƃ P zrڊ0 ,3&qFӋ)py7N&8NYr|\QfU12*$\s/J*x r~oë^0$6}G"PԷf7Ň \ޟȞRB0p;t|q Ucl 8i/"lh)9Cxm⡻=Ef֎­Җlg'?Z-r!ݙ-#p  3l{ƽ|y%c~/4"}.aS'd^Hm=#mK1 >ǛwwxnP֏VUOzj6בuuV{c"C'jATH{NTbܚ]{BGmj1/7bywo k#w}{K{=5tn(S{.n@= zs}!>T'^Bu+6u}Oty?׮'_;cx4{w2l ϼ")s"+`Ȃ6G(5`pbH_,b~{C=zpӋu*fTXyKat."B=+= A 8EiL#_Ŧ-S7};^~9)ˊ* /;RIoO3)S#&S_Z&XP'/7٢7X_4ː/%'X|i)tolq8-?:ogĆ"=\'p}߽󣨵u[g⋚2$QV[&]Y}d"gDuzWMrTZP'҅~J$1A- 6mt/"o凵mZ6^MZL5И4Z{m/O?d^ߦ߬.FLͤT\.$؛ػ=NPn\.R'ˋI ~|qm}%m`׬ϲ>|=Tx@\+%T*i0=^Tj B ƒP@1J#)1br&S:܃!&r(TWNʞc6%H\qH(.1@\ie2$jXg7ro)dl~B|4BKM,gq`c0CH* I9mރy:;Iּ+&Ė)nPGN_<̨LEEqBUp~5ZoPQbϦ]|BhOjoxzg̚8+[CL_ݛwJm3Mv` A /+ Z *tEKu53˳E M uBK#5͹Z*T"C*`]JkʊU G`C=u=79o7!}!boq(mDx#\ 5YYj%K9DҒ0]"L^h %DicJ Nm4%(5r&/ˆJUj^4D(:2xmtVoڬm-͠/tZr~(TJ%`h, Ӻ͐Ԓy֬b#L"-a,2n|`3d=s RKι־T&^]T9uƦ^v,<˸1D䥱Vfy"ȭ-1%/ QFVTrU]~Z6 vJo﫻 no7Ӳz^%9L 4%+x[Y5q$I6Tr9>ӯmva6 PCW}gOwо¼Ozqi1.J9T+&[jO @7TR 
R)mZ`ο\MC9j4$7-rgc@_~ٽn\9mٻe %̮ 璟|O^,%&G;2,%2Kz"ۇ<]oVӦqd|i-396śqT뾪$&Yu8^*SiYqv\ax9Vۂl6?mtR#s!BH+?w o}ǔKSksu[m^M:y[WvO6('~,? LingJ#_ӫ3+<U:śk,(˂Gc.MogNFLS^:Ҝ*5!gEq95ќ}4e] T2pepe"5= (n6`(BJ+Tidq 5+,U0B+TkJAdHBZ+kD(BV+jD@0g6\\w\JE"F+EW\`˃ȅP \ZT;WcĕӐp W(W+TT;ѻR=sϓj?n?2Ak`y)jT }F5v[=x Ч+fQϔ-U },GA}TWDz(mB(X6Auؐן2G&wdDv6>.ܱ+zbS0z264Eʆ+a6|rN.3>BC# XJ؃bSߴ#VC鯎[<;jFGrŤ hx7"{Vy U@p{p]`ʕ6\ZC|Wcĕr@0 :\Z}lsr\W7Shһ\RL YS{{D !\ n~ aaZg@şRN(j$<+5b;3 ũg ZO.-{<) AC'/e.ia(J-ۇeh;+wZ?8">YK3'ZT)5"US]&DL(gʆ4WE8 7FHbkRM}ZLcZ!d- W(W+T'Ux5г%LOɄHH?g:î3;\ RipEIw\:Tv`?@0z0(WPpj@;l%b┫pЉ3* P% Trb#F+9e!EW Q @n@ATB6Lj+N5ng0rm0APk J"F+i(1" \`*e0BZ+P˸AT)`pRF2Rt-W Fۡ q*eqe#Ҟ9! q %}G\m$ڵqQ0|~Y>4rq]_6xJ`bcQF5% g`&Q@82#;zChκJ+PWR+q5B\Y b~\ @ydGZC]=\юJblp\ |M8rRTr&hաgD(ddnrQ Ԟ<Ô W  P. ;PW#ĕpU$S P U6wq$T<`X\`.M0B:  U]W`%$\`Ntrѡ ՞0B_' 4p۩t}唄+T˽_D2G+E1 Qx@ @EHIfq+hz`m&]Va^/L6x}8 \Vx|t槦5*ni[|.䱔И6"41?LXd$uflH0S e/ܠJ,'p L(5o:FD\=\g$=u; j3,2>En*gbbWreWk"D0BZ+PKW J4 W wȇ]\LtjO=5p<@%c1#4D:\\kC5CkR҈ ,OlHrkX(Lz+T[㈫'T5w'B+Pk WjRR+=M W;Pe(q=yj^qg0rWx2*u늧J)=JSj9;gt"81Jpm@^n! 
A":L\C4UG91b*-=MV `pj~U qlp;O&\ICٹɕ tkvV/pp#5=Bx"$)pr9 WV*q*5!6O{ 㱸BV++NVx+Tyq +`pr W ll'_W Xk P5 .|6jFc2\\MCw\Ic#+,) W(7vT;T.t7&NWZZ+d0J f׉JxyFD%A&*qpYE5%*q+֡pjBNgdhN8YVLIA&*qeLk_RAPuĹk|}.r[ɐŠ`.>t j!O-8:ʩcj&O g&v JgBDGӋ !TŸflNrxZ;T>)SϮW%=H$xkJ PBF\WX-h@`prWV q*#R, ,\\cP#}T~*15o]tB{YOqXe*˦]w/gJlR)ߴV> Jhٜi;qys'\-2fg+ Jm|tRn({9lK$n6|nrc4_ׯX6x }ĶWj[:r Xެ_7)8|JOByrv|;Mf6QCښqnm9I.7v©TΒlUᕛZ|(Fd{bry{].W/0NޗaQE tzuDW!xמW0)}3K5=g[(憲#jC)wmmLe EU~Hlm*U:هjnPLRr\Ԉ%@$@A-qAkt mN'-n6sF)TwB %ŁJ,I"(@x%~t(̀ҟeAM[qz֌ۗكzTXhlvɑ5ȱN!ؾqR3)}"9=\\fz{9U zO|1-6wIM" .8n60\4I`!z"<2v4@wgV?{ kI9z )bJ & ֘rKOa&_PX/*~~8> !v pAO} T*P;Ts!("sӳ\jy˓qI@G0$ӵVRηra_ y6\$; &8ѤĒ * F]m%{'Y%o]㓯uw3R?5@MfTT02h>fGVch墙ڟ5!O RݝPi7ŏߟ|x0{4?z߽1U}z'/ɛb^' 5(jT#G:\&H<)x#Wkh2|V5EYqZs^5!% Ka(K\ZjJ hJJQtSd2CNKMƢG5@IFk>V>B3>wqIqX]BEsK`os7GF.JH |/='= / 䠯BS*$OeC\Ȇ䒝|I/ GMHGcb51ŔРCF/*Va/8L9MdJF}(Kt⋥-{1nV;Ld*uՔ%+ 7  FVjPYp0QBռ$ԣ'&X1(v?bJҏ=N?GyzZͮ7E *d0Y0/_h5z6-2ty^v UE80/ ׃Z7s=pʎqx:Qx 0:K.Q4v4`2]6Sɴ.%AbyP%^BRnd`sQ4dS0 K O 9ȸO aHFMMIЁX(CNB m]K9q`'aiA{J?=A&i;3;^Q6ѐV/dJv)%b(qV?  
e$ObqI&\"N"Үq\ e8k<YJFYtM2#*>A,K?>d8h,$D T~Ge*#/8Jg=Ytj+199$FY;xm|VIP1<R>S.N |4-Hr@f8&a&^p Fb ȁ\5;\9_kxyO}Ss5+/އ-^Tݟ\?hlj;&(wԻ;MHPqЉLJ d|բYmٓ@m{._LEPcCCsB3΍5zʎBeJ9_dbYPo~d^Pmҙold[̵^inNew u6Grrd'l-#lcnP+i!䣛1˚b’Fp90Nn0qiGCDz}kTerU&cm& %{waH$۾|QŁvLh@/dN .yPQk?E ;(vJ6{S$Yc" kN>Fh?>5H,N2%i!8J2K3$E*+9d˜&Q^q4SH'G,~xRIO^$S3ꘙH({ BYܾ0c.Z+AEx驌'ޔ :T% /q+wXY+'\,h.5XbGdT^姧L%!5t`HpV,fH hҖ4 xR "^ k2 L|ҭ`=Ŕ9L3⬤_WG7ds<$x_bN:WFtFlii^XsP6 fZQ ˹wuѥֺd`{Ay #&MH 6󂓳T㒂ȂP-nLYq.eh{;~0RZz6|y8&tbã^͉"/h<=g@.@ Wbz<Ύ}iG>>FEZ)9 K]V}hy0OG.\c50o|p߶^ɷ/kj]_ۦ#U).j*t]q@B@ .4*1$jրьJA;pzR7NC ؝Q*@cLM[ Bs݌K2ua_yG7ceO7Czoyaue`:]^̉me=Z_4~rv#֝'MLgc "TZߜ(zMES",-Ĺ2Pm,3v PD>o 4i c9_5#K'EtrA'?5Dc<"+~N"\?9e%xLPO(JG$ѐq/*wTm,cБ%:,&X1F#^ Ӵg4^yK\LE6SUc>K( /Wt`5xeٹME3 NSM鎬RъRm[P1K]#3Rnuy0H$PlyYbѰ/d)K Pf*v0nKDeCu#h\lħwTAvD|YTuGX:vuu˷r؝ 0o氿k&]麬(&DJh@r-Y]IRKTjxJ^a Ճ#z=o3rka^f|36zY[cY?T6OGżW_NӢk'-2ޖfzmfY0im|e30sR;X=qBN, yffy i>}vq #/FǘdBOLlХF#үz)BwFlߙ,F78ѝ^fD2)as1zɻL DBKYee@j|TƢz6{]zxxpǜV."+$(r?a(fݘ:"q d&RP͢RI?aXrAD e%,Zu`'Tb:>Z嫽Dp#_e(E~\-9r ޴C ѥ{h҉= }} k\tį +L?Mjdw-B\Hk"S@gD5Yy fW0`MaT}c_$Vz@@r%XtP8"wBhv&<#L?Yqko criE1.ן`Zꀹk&a{"yp?FvǔFêYWS5[?l=5S3[Ou@5NrDPMYՅm0uDde  ^TxLFwVџ"Y6n/: o2b ܾӮݼp~ _Q;Xk;֎=5N8StON LA |MgZؕ^ Y7,".uj lI$E$C!1`HQvmR@k nPD7/hib,k mHeX=uH܉#w򣹃3F[?7;Ǝnvt;pgNXm5W-Fג1] ZႦ5t% T6;r'ӹ׹Ymz.Ȓ׵V%)+l Ԙ`$pR8:!\!R3-XUyY**]xq[+ѲDMVK۪ SA(X (2u+Zqs;wUMFtdS|-(ǁ(~>-H Ӟ'+)L9s_\[-gܝc/d jr(+OӨ, a핕 :9 A9wMZmV#qM:nxBj4䇑!בRmJTe΃c׭5_df(Q*?TYH9:HY)K&7tP&|\(dvx4rur=r$0RQbf-$n|T05R"6InCP;lf4 ofp}XMK; `A9Q\U '}| `J2y]%zRCt`CyYMz Ѫ{إ,X=Aa2:>||.BAV.w=Xk+<}|IGT\ 0 Zj(sN[6SaYJ>/7װ2 hƖ%[vXpVE=A-TǢz-YP]K#̠{m:%Ѩ.@ ,ߴy&}_e}0GDCiujkGPAYxcz\y/x]F'"<0<>}l21lod= zx)CGс`Y#CD4xZȴPfa|iV ڏ3=XKМ.ˢJ2XЈfQ”(%y%oᡟZfDVaoBޯ z- xK@;ws!s(6^|6A(el^ ,Qax%H=VH6oǵ7D_y3sZ% x*҃l,S@s<,H8;V`9QpwMk(AEp|q9.(cRyqH: !˯XET̀!P!T]?E4(N9 lWRg`OVaIY6ZdҜA #0^H hІ_؁Lz FUg6Kϴů\Lh3L-M+*ym;kنŒPا|[aOK>M6sRՠ:Eh>1yJJyRqLx# /2:XsJkZV *mZЏ]̲n-tDK3`#k2d҇͢V΍2ETW'l>T;E O(pчI9>z%Uc0& ~tW|A.ag?-ƿbw&q+&/m>:6 IӡQ#1{")8RP"d\=[S"㭦)slJXyiEIaVuyR8@|@8ڒ*{.,xOBA"g$~xQZ$@f 
rw"3Fgz<|6ۡ?>\MªcmpO3=ާIJc1Ut=e1ڸ*+ɋ~fg0u3(]e_=WPaYc3Z ^:1W @0pPcr|R4^`<ߠn!=wCPBH'\{QH qމ)`ƙ(~Ȭ93˞ѧDcigrSbYK$h.R/5#et~xwk 9iZOpɑ'J)vK0K$1ǗN|v#7vq!Gp8ŽzDr#/Rip̗o]Z@~"(q;! OV,^2(X{P7}/:+_/ \.fY˗t*+~Wm,Jڒv%>))K zfMR&w߯XUz}PHl~@j VGӛ{kBNR)%zC c%f/YY`e면)G DܜEƿɛ,R kWnѩR^9%%Ӊ Yr`}V]6Ѻg^Y:YX- zOn<"۵w Nv {)"2ևif/FMD#R(gHu C5J9Aq@A`(vQv nVD1-5(-O9krl'O( ƅ2+PZkIg9@ F)S{BQ YU؍R-59@I>&;@ zk?D_䁀y熊Wё7Qo(hAAd,J yZ-n뇨3\}{EԀqݿǬut:# D؀#)qr +n rbUPMZ78%utYHY. Vi!2PIq(/5K|p7qf~J@m]d)p2G,^l1kCN-v}AewAFؐs(\rHBYVPɼ V3C)M{!uRtt|~7k)S~u{5|%EzB'F 0Y~ z{7L߾ Ӗ%Mp a$3?vLNN#j?U~ǻDt Lx d$?݃exI'+R{5U`D|C$s W;fYMP6'v涅aʼ{nNv-S&{-12}s!ԾmD{#%G,?Lܶ/V=e:#˕@D§aptR *.`V.:<KZFCVPY;ʬ4άŘM=F[;> &g.͚FjDZ38TmXi+O*DؾFZ>{c'@$ܵp0̘fHJ?fjg 5M46=U޼sV8ܴpKrw4cFሥ?Pð#M@&ˊ<=N_|(es FOQ}8|͓e2[JX"MФ~sb:I&Y8"~A ޸8ѯm'++%}TVŪJ '}KK!:wa\IӃb_SWd[k%#2祥e}]Vp~*.nXGt֢w c .{ITZMlꃇlIxCKBEp c .ɓ w &)H6Τm|3|VU+GTY:fhcwRW?) KbdHӷ ,! 8;qeX_Nbf1[ &sg? l`ܚ28_ )ЮB9G#d_W?DP(zs4FUfRјuKP )lC 7!j8$е"~*c=Z:J3ed71h+]"R^ׯQV{,"+E?fg#?_owed{sW΋{?H6k/n)N'yLŻ!BSJSJw. DN6WNUKo"J 9O} pi*)e49X>bj5_*;kOmuS- YP[ ?XMp;QR%5*7np=PW~Tyմ-0Y (V/ǢX/-X(&`dS[f{y`^ElkW~_芕L1M++)ra}sO#Z6$/"oQWE{l;G9!g/Kqt6)n=\)x_+8&.D2ƵL׹baR~xTw;LHs&UnT*( ^f,OS3$oVH.(u@,OVK{˙ۓ>`-ZGEE֚3/+>3e%蕩8ISJ|$*œZnsbD)wJ)eQ !ԓ#9P440F#@Jהi`(% RUKEqCʓITn0R-hWG$,p IF-^}y"X@:ۊ,Mc#8P6DRlKRPW`[c(M^KKsK= HB<*0*%Rˠ<4ywY-6Rwju [ Y(;RP+$20\)2y tn7dc.^&C@}62 ]kLk0'V8E)@(TrS} 0o r.DPL k #TZgH:H c'&`J`W h2pIR>_E$ FP"K:w 8E1fڣjD`ޗ3JV]sD2TkCRrcPcK<UR]!ϲ_Ua}I76I-zSen7)|-/)JYX$KyZ&/ea CaE &H+͡`>槚?1BL_jdGˆvhz&(Z3ઐAqOY@p'xZ]Nl+ܐ 'Q^7HK҈Ā\(J֐IqgҞL ݌+4UWWpk czSDH4r)C+{TkrQ^*I-D̶l5H[AsZCё3խͲjmX)YĵWaQ:Gb%K$}d3_SP!#AebP/F_pk4d71hvYНPdn&(üT1 o斍6-k\!#&l>Zg-F"3_ I}*2A$wؔ(Rdo^1o~felxt} 5LU$1v}>I b#gda*T\(LDX $OD$1ʨb9ιV8ӣ||QuC}F] aKe>"jO~a(şw d c *t@{`D}7J5hb'Q06!|aP Oυx0?}F c<A(o-B9| ۷FvXY^?s 5`.ȼ|)&614UjBhFѐur\7jWzkh/I2iYi60F`we[n+5PNk̲jcab+tS 'S2In:TԜfh9k3WR⭇n+^C[ۆ\eHF 8MA[KkR M]6<*e+[zA2|FY =< ZQQ`mːݥs3!olo Mb~09|׾ Ĉ /Մ@HeVUJiǡAZ 4ua%0۞V%tTS4<($wvQ8 za @ <">L{-tzܻCw0"n- 7/ť;\!i,AiyHv١TØWIC2$ 9x4=12Jg8-PBdC1jHJBn`@B 
*QW*p^RzMʀ㻰iQ/řp+{RL} Dr_WΟ, b{7yOzsfs}Xީqz?ʢT5E<:b:q?n0/ ,ֱdfz5IR/Wb<" IXD1E<$)8x6:و,zlU=>md0J} G@DaiW1}y3Rٌ]bD.v6?E 8˳w&f+%iLeTmAGL*&AB2' fZq& g8WSeyc0ҧc0ZsHM,bJ`vwͲl)GH'mo6 A;1Ia{2X.mΠ$yF=?GS7HƂHѝȴ{?x'y = Bʠ4"12 R "<2sӫby R͊ǷjgNXgcɮxyu{ y͘!Hv쌿>[ɦ ѭޙў35*O>نdҿߐX { yN.g8Qsi\ɥKJ.)Yӯ,&.®c!qRkoB:NYSb{ `ݞŹ]YR_ ǝ*6SR噫[av" -%#-]5TxrB7c+M50TapȦV$#ˎ6_N/x ޴ddm(<0!Ew7ŤϦa1.Ấ˴ ~G.e>ˋ=O4駑{s{~|{?=m~n~dv $ zZt99Pշ(eam#tJF|ony{Me`y=Т$w`ÿ?}~^c&ZO~tpSޮy$jɶ]enZ蒶4]WkVi>,|z4zDngLrΎ+w]KzFVJ/c[^G2Y˘:Xb^eOƬ9VBuJ,Otʘ(tlƬg +75YN}C"n+ր'_b]ʺ0&`P8Xɝ|`g ^sg&5 ge!y&5`RMNe)qեNl+[ZgfcrP`b.;1[ʅIej<Ƒ8 .l4 3x\Pwhzـ1`{X4]0*BQ %-y7?#&9 a0]+w/tߋz!kNjE?vˈ햮a<1] `y  '4PQUs%RV)\ 剧gp`y8;5|BYJCq1*$Ta5#{c0Ѵ:Z|rIdXWA=]j`δvyFIFhID7/hlZ\Ngb1y>r;MHi~`~z Rm.mn*]Ge\{+iFܥqg i"f E ^ E ^3x4"R,$mD:Vq`DZ6QmEì!fgiҮۿ3NJqƗ1oFzTtxwkDErB^IE0{Ľ*BET֣ kv4 AZxr @ ],}Vc)jd۹5cŇ5r@ HSBp4);xK%ZlsV&JMDz$ͩsVe 4ِ[/ X(Ï cbb8N-鶍e^^rD}6c6FZYe;]lt.Nem/pJ* GCRJxÒb'xp*PEF(B.?P#m)9H~/u(_K^q QAEG&baըZW$ ٩ے$ρbюpNR{ֶ>4F~k2X' .A!;GhE`nX! /Z ?(-@.|eMY-A1A˅?-ݵ8yt522sW lA,NHV'sulAoӖkus K2`x=o(ZT'c]boYM pO#B A bT9,|RςV[nxkAղgujBJ[O>&I ǭW hT9P GMs6^apSudRK{5j>(ȘHܛDBƪuƺme鴆<ő91 ΃e8C0vO3Z"e]8*5%hC~{u B_g0W5|J0!8g:UOadDdr ]` cuhp[)PZ( сDCPPsHέ] H|jܭ(pX^qPimaby*@0c~F.հˀ+Am@(#N;;i^wh3̩ im7a2޶IkL"T .G1>Z#6ɛg4?[,Gؕ2+$jyP03!΢GXǖnDU|TIZʂ]뛫r $^L^|vIdۑiZ#iF!vpThNQ,TL2fw;w-afΗ Nobp"a<~qaq i*/U 14֖+@E+$*]iK*`lq ra4>ssn=C`vs7U5TV֠⒴M<}s+`fKkzFA1G:;=)Pk}p_݌JwGt6l⋤ii$R fa/2 Qp3&[rMˀ{b#1ު5oj2#- a\qg@!"ԡ܂^%ZxcdXp&7O+:ܼgjFA߰ϊ)f o[N-S + q F^=948No>R\+7qє6Tacgu6$#ߋ`ھ[DT(lW?&Ȩ)c؟%*r/)sxgu6'ײ;z>{u{{SU"¿Ipwě/GWMI:Ho҈XsI" J.4!hc=nRRJմ,sP#ܡVwgͥߤ^/ ;x=65Y HH":$|s6ziW}[LFȦO$W;X.l?&i8ڻ`tH/x",2|VXib, 2Z9xQ0F` [HEdrTa6\Ns-O\@{v3"vZ> P-Qi|@J,25d hk 95Wԝ"j!`|79PjG_ބrڨ+:IPK |`9 OITxR[4_4a|#js o**sB*MjP1˭Sm8x4)Jou>1jٟ)-YI;ro̫ש`ڿtx2j~CY-lCƊV %$z9Cp-\^poVE Z9Ȼ`9D)M>r$kX4"2maO6\R{-1btPTBЖc|K!Ra_T&r,Wz3޲ooӐPrR-DA-E`W3RnאXX7*ͳ~,IsK9XlgFdKtdy2*!mB@[N6@ȉ…Kk*Fs0,~0D8Y*A7! 
]Z;|ZeSwP"tK3.,†{:,TT(G 'SWi^ПF=_>7Duۃc|?.Ԗ~$!ҕ}pUSySbL ˭͟[|6aMqz¼wFޛ%A(RaѨyR}ęY>KeKԩs!Gfܧ;Vl ľNobJZ[Fj;DHϕ3`I+ -,#T`Q.ĭ){/Zi%X _@ELv 8@ܑ||5x*`=m,I[wM$QVo=S &&|ꔆfD+`#M?̏NkBM9w\֜~Yc~-:Tҩ,~}BJMS<'4;LYѬ?%"=XBrJI z?mo"{\ޟb /XSO`yoV>9aJzfh덑':(6J*qe' UG[ ̄; a3H)\k A)2!5Hj8*xyr伖U%9%/PD&|,_>eֿLwg@ܯwT 짌xv]:*(la[1 翌WfΟ:$h4vt8RZ)A'[BTtK(5i#ߛh,Ɉ+/Aͦ!̉@:TZӶ?_V OT`w&~^[*jM OKGFK*3ahN?$MK0|-EeDa5AH+.~\X\*~c~f~v3}eue~W\]=jK|K eEKR0ka:™LH1&o-L1s"5mxjS&RRj#HQu8{&WmeHVbO $.}0]rޚnR~lQ ^`.Cx4PK3`vX9)D[Tz8Wt[5]` A%hGF%O':l:5fֆ :n 3kƿ *3BqwMSO  __UF|6%I[Z9ĜaVia<7Fۃ!puYβ g+ aё4@Τ'eҀc@"; h ŭ}ƨZM͟|Q[6`H JZ]UlO\PA^á{M:n>ul=sӖBuϤI8j\R. oḽ>dm?o7Sdsh1&Mi}3I5JJU)m WY'Re0<+*ei]r +IJ=s[iٕ4'E '왣'Q8WrҠp )w0L2Jo`47{,1LI,Vn8b!XLZbʄgYHH d"rgk".u7jۈ!SXj:\^B>n !5QF, @,ͬ \6T$FdLP]ZJh[|[{*\XDu Z%Endm%)ti̕1 ? gpaqa0B5̜`ff1th& /C Y&ِΚ>LY!oC۠Xƕp/0Vao<8D5jtü=)YGw`)4執B\ )g ;\FQreB  QDB2ႴL#9qLST3*gj&lY*(! k0G Y$zB\)iH x︚_B W(ם$5% h_FtgVw?xu]k(_FйрBl4* `dd88! ;5õK$Z̹ ͥڄ׭ k`5_;U}YMɶeoHXIH/[\7ܐrڲ\2&$,U!<1C}h$y ~μ\k`Ɋ̳MK=:H13TGj5q\y,ٗjJ_g%:>$&f'@#z|du?č$Yeda {MXp`9YH}=CjIiD(j_E2>}/lJF=BM*PeT{l^_G:gjpֶzqQ(U8gtN!Im8O/Dm+YdԠ{ &DJjK*3ahi\2hU zӂaVeأ߼YL_Y[UzxOۆZp:RBG8PbEZ@tgFb#Bog¸-+.[ƒq=uJƳJ}z]C#Y[煛=*llx|`>OW-Ho,m8pX0o^x<|yWpCLъvj(1khʓZNuV t–%>#r+GzMܟJiao X-zr!YD_mQW[CsѬK})+8Vk]%sZ^c+{ %iۿjH2.%-e27ѧ;uJJ:Yyޔ3Gmff!]r o\9;CC:`Lv8:'aCPSaPڎGz]7HdbuC#&{C q$~G"l=[yT=s A6溂w[ voV3i?v`Mf-S9 EDJ1~0׷~*4輤b2`ϐ͆o*u%]@^+Yq<ء*m:k[rҭ* Nj'gWV (o[w"XȌo*م0ta `!mW挩 όqLh b5>¢Lg爂Ϥ3.k {)3P[JԜ^;d 9mlkv5laEِ۟7bA( R|pPpk,~UCj tW/+-[cJu yPxxqg84e/U#*}.;Gyǘ@ֳ䩘Id8kӭphGc9[y4sΖNF{8+='; KQ3ҘΓlϑ '63{"gNǷHtwIލzn̒ e<K+m]0&ox7J#q燖TSk>Zcd# тv1-kܒ:s +l~c&YRb-ZR;oH'u =NGT6QB7&&(h=Mh<0LTIVࣽ _m*OB)jM3&zBx۩ϔm"?Ʈ||KFA:JxStgiB8=9힟|pkoi\W˽ %M]@']?zXkǛ;7\߱c\:~J.wHZ<:~5J|_U??ij3a<;ٻͿǕX Ӷ+@d&I~ɳѸsIS~7L(KwJ[=/9f|9ȓ}h94e=J,"3 ?eOhh,GZJ#d }š) Gu.i"~>gNZc6~ȒsQǧGg1oCBĈTlғ^BD4۷g~|dly&r='q1:-xr6&ϻ'd8hvv 5gE*jZ9f [8qn߿^]-\[JrGߝj;r~+\%GR8{ϲ'Tdg=W-LsIĒ `)DOQa-TK* .D} YQXoG}dyscydF3j5Y r왾_ 
;&(ﰽHE&YAZhyt.VM.:Q֠86'QYHXk3d}m6#gR05=i=48}wѣwwo{jcwgp~aM?e+pWǻ>-Z$ݍYh}z8mO\fI6O)RE?(#@LzQ@1J>1 [eb#M &Qs. gM*{xqx{@K,%E9N|U*Cc&e0GEzmP~ ˠV&' ZfҀ1ͭӲN3tX]Q$au}aY+2TH+ 5kS֛ 45F2E񾆔[pCZ`Ykk\\͓dMߝwuA [_7+Q$gDQɚhŽ-k JIgm,\!Ô](E+nT&io6y`˾CtPDl]"ᵐz&bT*}i =DoVS(='gkO0\/ch *5Z\T?W +L<3Ee$٪+*j*gRÌWYK&戚$Oj ՑQʤBhlEU.{XQva=%7n8 1>뜉u &7fpe xe̷ M7+i2EAN d(%kM5v 2:C- &. JU- )8&$\bY(Rv(ِQ4mVvd!`4_g_S"L#xxҁeε]e)+!VP&Gr˛Hr5kZ3A$n&D,WZH-dX)9bE,爅e"d6#\ .{Q9baXoO`P~s ˜CGD}hݣP] ٤ke5 M ˜6rf>ifulu|uZ2Y֚/r[/DY $Jbd6)#+8VԒlz2uWkنM dᇞk9NdB$W%ζ+":MyA4HT|Rޞ)ZdD94FT髬E-6n;wNo㒮Bv# -B7cv}UxޯG-@WY\ty0-}v1DmV#Va98Σ|:I1n`w{n wlP`G4H&{1^ ൵S&i|LF8Mʣ7_ޢn,,4qհD@::AU-\ƺ΁ΑcrU7o=tN\i ֻ2 KPlo6=v_6ފ z%܃+: !u(Ș*h ,l=*hL蔌 Cgv}Aڕ9 # /l*oLuzػH͉]١ԑ}M#L&aj<0h (RI*n7Zyrw|8w뮹Ca+%$!>f*%T* Ni p#. f E"UhKк|UiZ,TA =d%SXw)! uqrpPeUn$;J^%5gB$ Oz0EM(LM YZh0A cu,=PJ%7hS;^Z*V7J jr*dv0 vوeD[!yhF~qKCd1nV_@PZM>K.$5P&WEN B!Ad$clq{ɨKF dzH lioo//#7FvjO[P@)&odE{`u e@R{fM|,IQXX^/#:&'{?Qk=Av=N(0Kdc27rlDOZWk+{)D/9hhWXzžקPmqh0N,ÁV>+a$e_4>99<@Eq,ցtE$pUtO*NO,$C\ 7JCg[ 7 =JbBa-UYyTL:c4Td'Jm!{qBDT t2>*deO5P'W/~PMn6Z#m=4EƲX R}7vO 1{Umvz͉χ-G}3sS qwYU!CpEO>è~|޻:fM?~䃋)S@2U-p`2'Feb}:h:wޗ7/R!#Kҙ}D9_eg9Wk9^ԥ:H! XC!$=(䓏Ǔ:Pp(T,G{g J:l=&𥨏tJeVMgcK=u}۽c-}ldWU'\9à% M2)=A"3IICo}N$ {;Y9̑k>p l8uҗү^O"ls9=raO c㔦 &dBE8}{>9ԅ!3^ͬ~yG HMxQ;LgR?ސ 00#$ZzLg>+:_cY֫^c]ݷEv2~׬9u/ Igd7#5#;"}Nΐ1gxFv3[<9 ;95givZt43ݸzp7#y Yuz<ʌfdV1!7#'d8nE9Bd9{G3ݨ~#r;yNhDaON fhw9 2wZt8{~u^귴+ _$[6Ҹ@xjVqEV#z7wo~c nLwuq8yY! R5ı8%KvKm)3\euS=ּyjyQ/(C’|w?|r?y;\߿}OSؒV; *E'\dS~xAټvyU<޿'']B- Cҟ߽Z:^}UV޺}Zn[z y_X? t Hl'L? vXU鷕+=Ty⪕V8MЋy+2]ΐgJi2y:3'}k7pabM9<-:8l信>)Br/k/.DzZhٛ,Xs_}~i>htv׬rt~.)}1+"Rә~spYp VʐX=f.NMuh#Dg*>k#]`AFGQ fhJapoч:XLϰQ/,Wk1(Pӌ+Ta9 YOLcpFL@8 tL5$\5񀺖Rx][4IQZT)h'uLޯN޽{~h{8~)?r?idl{w7}Iy,d|@ Ab"{ D~! 
馁_(BN&GQ1k#Dd0`(-fAP,m_J|۾|oU 2=PlNEI!q\+R*ZgRVQ2>U'yJ-N4a6cx%\k-Ao؃.vjFj/yΰc;"prOꊷZ_!~-"zZ mhFYȲu2vg\ȃAv痿+ wre^t%:T1<B^xyCH)yMԡ(*`)ˠmWLH⃖"ϏUB-yL:+l,{;@Qڱ+sOaM]P(!%ĐH9:ё 3;hIEL8vH;"|JOfj|@]h}BɠFUkux.[ 鍿,ݬyX0=/B݊}nV FoO/$ -` z:瘻ruwA;Vqy`p3KwWR\x2(5[ަG4sks Ξ)# !ษKaIR``yR,4{3рrfVXih{n#R8,y_>GPB(o1#+bIhn}ֽ@yZ i-٤ń耾Y&&syVjj'a eْ| I,?n)O%2mUFi'oS-oMVjpvJWþH"-!>^^Ta±[ ROcgyU=}\~]khdRY}Y,I+*>Vq};6ښ/>$wE#ee^zyym̖"sgʲ]ЬAɒs͙d IjT&'QRZar,>ؿkR2! mdG B'$ H6܃OچQs:ԓBA05B77Jkk˝h,Ȯ /ЈVctLFkg]`Z ZeH0{e VpK(`391d.B28L&Vb*%QX l#fTsl<XqGn>lՕoIί{:f"F6f~w'j_׾_u%kv3ڱW'8 X98@KWw@|s~N¶g/h}Ǔ2&Z8 e8w׫]]s!ﶜ5+xȟ{x@#]EleL'ІF_722PWXZ;uX9<2JTBɄ2 V@v2נKy?CZ*MZ[P2 C `UãUY{$*$nD>& (`^}Ȇ B_ȭ)1ecbQKԓ'=ǢH$ObQq,F(hX[DҘXt##^OWbCػ˞{ ,{JD]8[@^*2 >Z;RgHL[NFRDž$'G-b uZMS+M兼f);q{HP\ܞN†{ru VP1DuG$>Xups[ұledBڔw}FCRO$ #SO0264MmCs%n+]\u_j$U7bۧ,M~\`Ռ`A`l.:HbAN>3}^~[H܂x)cdc-3v = ޕq$b0S<"/|5za,JkȦ<}#dꬪ.ҴaKU_QYjYeJj9bpSL-> Usgr b x@YLWƹtmzKՂ9A@ tp61)O}ӝ9Sw қ 7Ļ?: ]DX[`Bս<YԳ$gIPϚڬ{qR 5Vor IE{0=`-ҵYu-ޗcU uNוH 5"Վo<](6\lgw=Hn C4G JK0KVfx*`1TUtXpj Z ̆dQ[!g񹾽ظ^zmnlQ. 8^&eL}{\v o;8㌏γڻؕi@S)a(=2І)Л e5~S{NPH` a\qچܭԬjq ECjE+TS[j=~44Lk'7}ro.:-fi7^>:i6shـ6(ԩFkbT^Z1dpָ#`JFA kΐhv5h ll:l}jw~Z#{Ǎ*k6PL(S.ev Tb#ht>K1*8zbطYF)? $"VQNj3ayEX] 6*PyR?޼R& |y9e^F Ǘ}][VE5fU^JIh2XUZO-pgEP[ХI`ј7߳Q3pa;&4zUތa,cjFwnOgdW/ZF2J _ vͶ[e1rѸ+}2懰,g),?KaY3,o1KB%YK]fd\ dPК{ԇ4?Dvs释ʘK?~0F~X_B]t?xϒ>K齛W" c<Ԋywچ@0ADڲ__".m$LvrNX;ч(QHqkuPBypZjkpV !I&3' jmIp@A_g>CDs.ЮaE<-(h#XY"MTRJohhC66GHFH~Ȼ6.,}zY mL1 ݜXI>)7>@ $ 'HQ? 
'94w4x&2j'dIMӜ5=e9!jFРyMfɻ`y*o# ,H4*&#UZ;ej;SiSkrpۧHᷩ"`M%Rr;0Vܘtы;Ȗ; Ej"E=*9Iɬђ Pu&z$7hIycw˧+* 㧫ؾycЍ]=^#w Z?F{/WۏYY]|P%Ι tWWKزŏwpE;d 5pIfaz۟ۗGQ锣$טʤdiB\ܼ;c\__}`gYfŐђ6ipkN {%eQ‘70} QDg __wO$B1u9lo*fyҖITFF=|iMWB@Dg' D Rl5d"j\s܆ -0SluNUymw*aC޷z_p|ڊsssN!AΨn /\/| b$/ Psm'd{¦vyڷG+ZB##t_vaYh^a"LP9FȁΓ` "F[1o`n2ɭYw{¬GBA|v>{+xzpfn_[96GJ9xzp }sTÈAwf:Ioc8~+Ņbʈӣ8 s ~ʟv{ܽ[~]!B9AثgY 0eяh`/񇙽Hݦ?YiUĪ?==6DC4'mޥ՟]˹$}"ڪWMvղC*uHu..+cTMa:)[X3e]iKuT1kv֟F)5}G"eE@;ը^(h o=Q##HI1[8İ9 (duu)mc[O1m,dS52g,M8%93C ػ*CK0NݽU̬\xqA^̉5 iď.wkBs)~ϣ;IT>9YY)]SYù>4g9{~PQ #b%bsW/#WX7*`rtthc#َ=)sh9<͠':USA9 B>p3{R==hKSq炕c+OeA0QYW6uTFV D0ϒgfM@2f/VoQdE)H KqcLF͉V acBXgBPCt;Md-pβ(9cD5.rg;" ,ƻYJ p.q{dT _1ħO:yy=֞||{`!~SNw= Χw(SSү4"SOjN>ޢ-gj$$Ap,0T,)A\Dm~,TȴPVb,$ŨQznA"YCxG&ZME팱-}SfBPw<°O+~:m)u\Qp>z.A\@@邎Sk3}rQQz41jɮ VW  #Z 0ǪUye[ÃW-`!#Dp]xzpfn_FWUT}¼\¼V}U(׶er22D6{,{שv8>j+"nDぷ,r)nF))/+LTJR)m-TfC5(}@~Z1Zej.J1"7EM@`Yԡ[gx[Vw9Sb/NQ-:wƱZ*}MepHem {C$ҌUK{]bZ##юdG1.w{mw풣,Q1r$dQTxPZN@ BstL A1jY3֒(j̢KV׎T _n2-#thvZZR@K$4sHՊ ( ZV>֬VFyU=ԓ43QT$IGd!%.˽vyPо#iLAwHPl{/.A LKUč;P;Aq+# k$86Y tG<"$рrqy Wǵ`5i!ܳ@P^j0DE͈`UJH4PPGD{Ν $` ςuKl6xB`06 m'ᎻH-$GpC0 3LYmAR ϻⅆh,$)?qPQk'2֢qKtRjVT^=#jyv,I^_f~B˧'~qjP _3o{x)WS%Ek 7Afq(Ÿx~]ޜ0dĒtڹ?gӏ_`|{7[ Շ͡mg~8Ixۓ?SSʘ6(y2@uY~0xf}Xkݓﯮ]- (N)5G;~%[_& ĨLa%piqFl9Aˆ@k!sp)%@nև3[\ʚqOl9At1+E>X,- #ޭ/EZVޭ9} -zxn4]vcSTPй9T=1R6_mDO2С&%wT1ZRZ ܄Rq2xa~rxfٸw60l  WfK:AL8@Ic͞_n3DRʷԇIhe1g![mBlRDԐPDrcTj5vShg]9tlN T}Z/ŭFF[s;4գ_Z}[%?^gق]g,h 0>\,X,#R'N?VB~dhZxe>k}$?{׶I\E%.i~€>-Nf-izfm쿛Y."̬bZ۩`^d99:x=x7ؘ[KzUOͱgv>n,۪B@j&OZzøRC6ؓRVp^&qQ8Ka33FɒKu@'C:W`&־ Qo87vom?|,_zZ9Ӱ]'(;-p*(17ߩF m[PYR|-UMPjTU׶d %XAc U1.rP[nPHfgA`Z8[kD}lʚպԮ*MݖTXFUZctG(CQG?@)}\zW-9|k3$A"z7 9']l]ew\8z~%]֨z7b/5$ a+5i.hIn-.sYFNcyb68@4MMkX'lL3lrm݄yy85 Oؒa z8ٌ}b%^<4ibf oBeA1433hu\"GE9GKv+>a3Or^$w&ȸi2{ZIÒ[8%ۀ"(j'oꏏ[~6RnTc]y mTbm<ٛc4$ #nN.p,8g{[T{qFb/H$ؔ 掭Ln97srO??V㶕T+Z7\b؀roxwb$foC(YM탉Oz ܜMl!aMKa )o&Ğ$ٹܚ@de L'(Pe!=sB 8}Q5 ïSJ(zoO0_H`ҭū=_oׇǯIUO+;L֎sڢ;G#Gƪ"`c)m#^h8Spݏ[LQm/uqVw. G {q~gf9./aV-wb X&^A!g$Ypz?saxU;V׫ .>p*/ k3t,-0 FK_0#o=N9[?t4Xo~U! 
W= Bao״ $U4rhE#l^!@ᩘL/i9s"N/k #Os1Xߌszslڙ~4]6%M^@ԇ5j- K7rW6gquVPsIv1`ϜAݬ] n?Uˏy|jZ߬߮롻z讻_m+EKXVbJQ[jS.*M;+6X7!,VTkמAהM~˚7?f5߃i͛MzxN}^)[v6}Kma[r\]U<{<0SQI0Cp,E!`Eȑjֱn%BjS;jM K~.7@ vx?^ڻFE  tЈEJӠBN5~Nqa'F9a'8<0{ aQqzF c 5e(CNΦしZ`0&y 4鴒 Sx xhYQT%:)mUEG{e QRO4>|;C"},K= >N itZoA4@kSiT(@l?ەo| (t_eoh;eGm׀M%&DgGۇ-O (X 6c$5Dq iR:gɑt@\Z#hWn*@VRCʺیӼtα-zs '|P ;I}B0&)zGB#Iҽ1ad.ߙz8g aɭKjeE$̀mR\0܄A>z6EDuQ‹| w0[V)EC3v>?{^[t}>{^M}m;Rq/-{{;>Mo  g$z* -i0E Y2Ydl>eu2S9u#ZȤzJCvveTʦnVnu[+DvS? \ӳ,Zv,PN3D{Yg! |쑲9ą(b3HZi:g)x],!0*\Х\dFW1ZԅR PfY7u8^\1%.P1Py ,FRv~]KےY׭X(Po+{ to_n>?|sנ̓7茝/߽Z~ݻʧ@HfLHg.$ȘȐFp/O&H= f_(Gگ) -6)%bI6e"ݳ9xH­Җ8QpQU]-A ~UPf6{]ȫCǛ\<P!M뙃<ǯ+AP |;Axɔ9c@ry&#mN\D$yigZOVQu7G<ju^_%UkT&bk b8o.2b:py^ \Y:^D9ՃU_#HD+ 5ZYK˽mOeow]][KUIu% 7eꆌRRնhZ*-r v8 /-OPRiyOͨnȽ޼2Od8p/:ݢY߃5-ҸXʶ,;[&)H)jmSJ)IN{p"S-AEn%SU |kٵ΢m])š7* !щxx8#-'l?2Z}k߭s]~{jNtdFɒVg>hY?S{E:҂S=,`hFJ]ƢhRRݫ K`WoXJcPv}rJc rgr]1bʺ cd#qG#d.k6|4Ze5|N昏ohIB'ۂyO ŶÚ2YV'tUV p^0#x!a^=HcɼI)T>-Thmmj vMӠ5Lٲ!(biB*֞X0J@]У .gcDZ{$SFU㶬K.K4*ʶNq[Yg\GIT.–I~>yV׌fp,NzܺAxQ) Kr/vzJ¶DXcnÞԘjE$9_z,ܧL XB\.ޕrzIF 1r>XrCk`l/DL4´c,k@32)6u[sj}|UvE+6>Brù6~+|m B1W 7RAK9k0r.'4v9A&ڳ쒵?́i"p_&0׈8 L7 gA.78ɣsa{485 A S\(w74׾ygrF5Υcw%1nvA <tی-%6܂e U^y1HYʷ>yɻOu}'+@c,㶨*pmL o,FWV7+JR^Iɝw5ULי31C樐dT%caS0:7O|6 caaDc).X82S-пVL5qr..5NqޱSMqG71ȱfZMj d)|u(G?Wa>R=Iz7:]zվ}ryZ#*,p~1jιIwlSl&ljj/|lSlp̈~De/Mupԫgʚȑ_Qqw‘@Sez;z#i7:pZlɲZ٘>Y$%/ :Hɔ_dB~Hd" “ڲY)!'z0P'kj bx\ka~uR~ypJCV؁!? .`8AB \=vgo-N'Jʧh6 >|*GPpRqJ'gQJ `Ũ`koC@!jb|9GGn! 5B| #*,Yh% e ,ZLI|{O,Ro/eAl~*)8_O$P4M2Htܑ!1,:UB쉚yZ.Si6 UR"nT359IB%pbѹna?.?C\sC_=;Gr"9+̀TN3&hM9YLQ F2bo `垲}Ҿ^^ӛUFC"2*#a WN߮0A)@L(!mʴR -Ĝ_o}v!@ \9{H30<$Z " Zb"=T P$P\:x8 l- NGoX#(1| t2F̐QyГ!pi0=ުT]Tٟ> HZPI;'O: gP5trm<AY%֖ ?~ Ђ7ǁՎw >5./Tu]>_H%6K|I833ʩ@SB:]ur*_n>^8k|KOwwq=W_J],߽"pX@|܄(~`Z΂MIre|8o%"`0J 0,QK@ 5-B ! 
hZ9j)Pp)GW1\zߦ*$I# V"!;e=jG R D2 @dhRNrQ7愑<җ4i% I Fk$v*Y 񔼩kB'__ݎBaL#gJb=ahŏ)s-|bKf>IeF1Z=0AIEi@c:Ȗ4dFs9n|W>~V'a}V6Ѐ[aIkvl<;ܷ0q:U"vA36up;=Gš20nӬh0baKс g#A:WC^qc^Х#Z+6KOR](^hew섘g 蠀0NdH7e<Shm Věm&5ZAw -TcCn{u|1;0 "V 4 k,/(mX;\ַ?[i;k-^v{f92ać6|0rdh=O[>:Dּ'Uձ4oWo'S+>pMinP Uꋗ} 똻bK·wy,<*S3!dYI)G4@TQU$unFN;5 3*IYjR7J$ #sp ((ER!&` Ŏ\Zzi".UE"n/E+_/ۧK)x;"aim;yec/J Zb 9Ӟ^>2PIb˽14Yh(s @0w4(& zM2Up9o@B+ 4C4cL]'hNL!%ֻ0vo]=w[N]JW[ei_n.]52j1!{Ki^8^׳&#AI񖱶E =+?,7zt.>(-mu uS4KSq!ūOäCQ$V+%%,}UEbBj_f*HU,DT TC>;:BوoGF/V=̮ZR81~x/;zj'֬zjN|uiZUzLekMϕB:,kXK?Q끫N@UYjϹt!zF #>2*I(2}`F]iZY8! E}E0y cۍJ|Y`R@.5Bq"4EhF (&)@ ęȳ1{`*KȘ #BFX.ioVFIMOAۺyqjz3Kw~VLՋO.\NCX"ϧR߯iSg7ߺH/yV=wg?M iKq͇Xfv/w{Z 3/gw\?Π5;Sw}6 gO m󏿐Q%׋hTXzd E9y5][49ȞcSKƃqXer>-5qbə1X Qs&xP1G&ieL66Z9Pdto%b-](}^_F[ 4xfa+j+yFwtfjX1Bp,eN{޼@ ۙl8lch֋8EHvW@8^/y0IDىX bm,rc&f1$H`#>*`m+xjCXf>8r%F%SYڀU,QZ ^S+epM׃,5 cHwFK9JچK͞*[YM}~6fjm{2c̏m}=k0Ƽ7'txӷ2Y (}p~Jl M`$ z)`Z,gʌz<@ן?{z)er%JDg*yfAT?lǭyR2zḎŒۃe:3D1<^Ĝx݌ ͋}cZqJ>`SzZ\1`kOZI=:6p놓*䆔h%xq;V*ΡpƋ]^`uV(2p*HZ Ed}skd& B abAZc{33NzdXGANG)8 `\IPK48c++հbg(Ȑnji#s<הaSFS#J]?~uOy;a_[.{ހr(4c gˬ9h[W‰m@uӚ[+6XĢEgwvXE2d -l^(uKgbN*X."fFLRnCZXE*6a҈ƍu7ՃåssYB2ad(, >s3n\eOb#.?']rw?[/4mߙL8>?D[[@aOόgp cx&p_X5a[ƛ)RԄqu.6o2RVsg|v\|o$9al`#"1~ȳCiݑ$[]Hu nϮZS,ƕ_Hye0䎙޳_z}rHo?.ӛOҜJ~;McYʫ~5VeiO9wϖ?5*fx+mƺ SD>HŔ"Jqf;Z=]h멗!Ӄk%rU)r5K+/yLgH|y̰1x' .#l(0b ,2|4Ŭ\k.cӁtܓ4'604u`h_=MGmU0NiZM O4*5mBC݅u*)KF( [վ|}6h46c+g8A[= PW|`ڍV[yՂéVYTkκ'*v ֐e#H] 4wTkMVf8_6w+Uqſ\'wCśl<+er{A~[__WUUb*wd(;2hS'BVd ep1AƐDNV;@lh! 
aU幙 a?*Zڋ?<\O{Xk䐳eln>§-,giK9 ?5, ]tT[Jghs*/O 1^2\J+FFxKtb :'bn+y3Ԉ;IW mH}x]c5JV(D\Lx%U){0Je9(/̌O gublaLҁ_%|_<պE2>j(+Ahɠ SɅ}]E[-:ɒ+m323$p{,3mvFh]( 3,Al|!x0!WFEQ7QoinikNxeV ̻DD貍9w'$H1˔HL71J LL5`wMÌA~}D]oRmPHV}8ޢ$i6h^L(RrrϴVa0ud`Alő2^>k5ְBzkذu҅5iǠ6FB3!D5 3&%2=و-2l5&`)18P3Nju@JzL#?#z@eܿw(ܟY" 9Jo8K/}Yb'X%tQn{msbA 4}J]i{%hwjxb-&8 Dkcn.~n ߌV=<7m!~MzZM C)h)_ryI][{24EŠAd[W>v띛u'ɥ&8xtă6T{5;'Htq( !ΘbL$МX/άaxrpzg{ 1 ƠWSئ'@݊} ;v:auŒNBN7-J'^ ; Y2d/)8<^5m ԇ%w˷ 8 ~ﳕۼ.i=IdT5_ Sxcw6_v/ٟwr'=Co?m+מqtk4OHUR>wt%5*I~#QaT **zβY܃a69hV (J[렠Y ,@{ mi[Y;l)TV= +}mhU_YZR5C-1`8o߽psUQ"ڇUާ&A?Q z>NQ.hJi!JA)YxUcW8*Pz8csԔ{yVr>5NS:sRj/ N}j5׼`7Û{3 p~\^IVhw jϟ. gé=cicw{f|]>]uc̩AbkK"%Fp赠`8_0JƆ|ݫ;*sIAϤc'ZUii/k͋}^A$T /LZL8IY .CeOee43fjv*ilMDdTYf2qnBP.I-U$LD ٢ ":ٳ-ξ2#u29c 509Th%83RgNZxZT\ʜYPH֋~3&%06#!}ZΩSP"hd'Lr#+cq&SySWp.N9(FNI,+D8h#K@^}I()BHVPzkp8WN@k!̴ u8kMJ£YZ**(Ʊ cfb*caXX:r.u1HQnib$&RҢI^",é&?{G#8rxDmtXT&$K9هT _ОwV3uVPp:uօNBV3ΔerB'ñ_H ~z#~4+z r 9ᜢF{Ժ>E,yCehn2T}? X>A4~(f=[?AZbu\=jm]5zx"nڒؾp-tccv1n78p9T[8퍂I+њ]?Ն;%C0}w)BUn$/އC "]kICb p:.KUh Vzv_x^w)/ )/nny2Icѩ)R:XU`= 7iqȇ<_>x-U+Y 7>6evBw+]SyUm y&˦ZS{73;w+]@ҩnێ+Mpk~u.Ug M#-v>nSңbt]uș%E\}^U7mjuSvӟw[bNe+M(">ID`DDQ7ܳPrk5'q-\_tfDʶ t LrZ.^a }6s !v4qsjlf.rɍXf-O{[!d\)+7ɈSO15mH?YSl c3j9Opnpt`Dl@22[iBC:TG/EᒑxYvԗ$MƐLIe\u ֩.m^c[ɱ1J 8.;];=3`T1xhIkO?pٝrV(g3a9,һ6VJbO@(R$~J9X.wېW*p:S"ځe&sQ1 e!Id?{**Gi9ݬB%0R)p_-ŭ-b3E F]='x$hbҝ}E_>zl>׻=h2%45$l ogmuЙCgBsܖYê= 9h(FʌO?!H I`a0N5p+G7ޤ "К 6W9)V}W3X!ĂR=?IeP(THx8Eoבx)yvB kmk@ & W4T:+]Q-u?㫕>G+ YN8< +JWT 7^9Z)TfJaVcrļsikr0hHVpieEBSޜpZm+w03kLduـyUpXP2XT,1kb=ϖķ/s44nSB s&eZ 9ڱ\XU5"04o9eLM@5#&kgA=-7ֈ OsOaIDmZ~gXm)Ɨ r̂s4(r,/)~w-dM9 kdBҞMO5h<>UC437Zݛ)R CXj3`iq7LƮ>5ӃU oy^[h9_kdw*(d&_V;aIrwЌQtF@K񡤘G1'Jޤe"d1#bfS U?=B3s ,8'0z4|@?f#9[rb0'-t`7YJ 9~}QD?emdJC>5Pq"\#Od9*:m~mK[_"Gf7撕bAqR̻0OAeOS >ʈ`"F6e-O2t2o31xrx>9Ut,8%%{}|ݿ!c.5ݐJNo?I&s44{yu[!Q0Yڥ\(sCR;֊~}hoJ &a&.qG2oyNP\<8"F<uL[P@nFx"{r/鮴ZZG0w ^lЯ$'S8dF3xwc*huMDR&3$xp˱CaB(q`& jCI&gk\?zO,DSuo$EC)Y3Zr+ .U7--0/v6Y{ 3z(% VLZL={ v3,UTͿLLf1`r$JDTbG[ffY܅/0cLV5[5=lfMVk 
.{nwbS[a-׻Eݻuˇ[k,2첊>cf#9Qz[V񾨟}-;~z9k3NOaю2/ X)bƔ>׊1 ؤ-O jLoq t?6Ś *ϊ9NoLS-cu('d=2`6pG 0);u X:2M =muP2T_̳%xi$_eV&Ě;Mud$\QHlI7){ͫjmy7RLuۭTSHjK7)ҭ\)Rh9#]*Ιd{;E -ǃ{C jϧHXi9E -{ejه)ҭ2\5E -ǜ{C!AȽ"݊4#گ)Rh9\CY[bVb:E -G{CNrߧH"Ys8)Rh9-]R͆Vt*ʵ{6EXbSrvM{3EmP)"S|5HoV͗;6q,`LhԽ++qk+){Z߿y,ypvQlנܸkGE'pwDY"ۅf龄r]s*{mkn9nVF0Nj,#bXzq,$^B(O'HK[7T]DEv .V\e%HN(5ѯF0 `!q 'O'ASTe܏|f91E[Y.Fc6rœLJû=bvM 3,I+ňI7>ƭe]c>P'F4bxFb\_- ڈa).Ff _eEM:&D5~z M Oֺ'xV;IFR=I-j'!lm8^*7A"ZSbfH1S7 &R{ZP3]~n.krZ ^6dm&+\Hi.VkSh*s3-3UIksl iL.,P̮Xx+oLH}§3g؊ux{ qGLXԓG,yqQj0c2# <1`$C)]h2X0&E޲lj!.FQ!x , 6IcyÐ`s5 |^p)rjX,[ܙZԲY!HʗvEvNh= -2j -5 =Xme𘍐@ i.jV󊋳j,THxqhϟ|} y{/ITHAOo??90gG|(-!_F'\r7 ~zl~#hrwj('ƴ"?`0v?$eP>O0FL a?4ZR|(ʄo;ԃz?P0h.WrH2 % .VkMXI~τr;r;L~p19Jnp -j Pf0|{]\~~XCAyA>ETQ`X}W bVۗ1؂#8,3jb?= /QcA bpz1ڥr~#~?~5,@`X}xf#׳q_(Smܮ%-菽5:tΫ}-ԉ+E9B(USGWѝt#Cûʃ6wl^0;@l#v?vhݺАSZw_»ʃ6wl^t)bvophݺА>YkRkRAp޻[vb]_!%g76?c+d-&h< 8t_~<:Y?]t?̝n8fv/7c2Y>/GG(>` ?on|o"--.[Kq2$z˕n%0V"ã\}٥,e`C*` S@d7( G iJaz1,1W&,> /rc- բr {lAܻ!ߟo/\**w`ޮkǶR<0j2=~'Wv.ӳ%ǸyqP{7 ^4| ͈R;~&ڴOjR|3}4Q:Dʡ(^9RiNJh225R$ gE<.+aQRj a7:BCdNE+n.Z{j &C]<{lPyu''@A`֑m[$hYQPƹ!h'h ϘP%H͜h."ՐndonGm2$?R£ 'D&T0挐@6>mA@o86#^-8$J"3bL$iƃ0]Q1+J r2dri#ɢ z戓s hyCȁްnYYξί\'!bEuO|@JtZLAo?J*̠H|9W CP臚xEઙS5*E{=:7xC,Jfrya%$94|%:N+* &y997)j00JŘ&?vEdo>4=B38=ףI;(')XAoy?4C)0 ITa-#sT򩁊sy"  ;`FnCWim"Gf18F4 kh Gl0Z݆r[ X:E4AfQ'_ %lϰ-pna?JXVFǡ0!RϦ(z'(V͜h #gk\?b'g!_K#Ae*DT?'&"0VqYB[ԢB2є, <= ݄z&ɤf;s'>4`h[׬LpХj)x-C x`-{j*Me_/WP u1O@ ;PU.2E}h5KS3/e.2.K9jw35Ii٦{=8#ߘ}Q4/VNY:U67HIЇp4pT--UrEƱ`LV6IyS/:۬6&V_s&ِb$tR4Effbo=h-C55͘nOs/4P&X$=m?UD8I}}R!`ǟHp?#.eɲdMe܂3bʺX*ndPmȚ<e ,zCW-6U(,V,|tmDy u7BHNeHq/|7Wf ~ 1Βg{cB9q }HķWc[~WQڎ &#EJ1ON;wכ)sVqf\nY:oZۢ^nStȨ1M[]<\Lṭ+nb=!!jQ5_/}7-\b+Hyo >(?9^UTYE\{ ی E$kUw":5nuOlai7v$ 2p%`ZsL_p\=rHZa縱8n0i:N> 'ԊDnsu>\)~^Ҫ^g܎78|ah>+7AmVnl¹rsp =%sm |ETZؓ^+~y=^m%vRvcL/ U׮YY<à`ĝ FďoG<[|g~3+$ nˢsۭ5G6];+"^fWGbtų}?A*`#>(\L;)lBv|_֠STYTHZ Τ' >G##:n_yNtegr$8x"e<TzwU)@SFޖ3=1(=%^wn~twn ' 愘xϊeE_GOpx‡XvAY=։2y'?gM&Q\pg y>-sw1G $Y}dGtsΪWC+uU')gX<-xZ*``33 $ѣIAT'ia.-:5 
"V62P3F׽o"6,Cv.+֫Ȗj2ٷd2CǯZ]EL:ې2]5c")Nn{yQeRjqϺ 61s [gkϲ"Mjk)ZJU\KK`($RցTtϰq%]Ws2z&%Q60MMDN[{*i+u*r0 IŗiD-j&k<$#k)US50d̂+O%JCfMu^Ɛ_6秠nއZ6lY0iZ ҧnއQ~T.B֫P]3H\l\AyN'9 WC.ig/J51K" ?~x教׉!D5 zNxlk։5RT5,%BexŐYPttCb*VTz`zbj2S]ٓa8T$~]p{ZOR?ceyUmPg\Iu-&jIlQ@zӝOBwP],yiJj@0D=c`$ۮe?4-yauٓUST2$Udɮ!1 P)̲_$MBz؟\U}* svar/home/core/zuul-output/logs/kubelet.log0000644000000000000000003470327615135772063017722 0ustar rootrootJan 26 22:39:43 crc systemd[1]: Starting Kubernetes Kubelet... Jan 26 22:39:43 crc restorecon[4679]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c24 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 26 22:39:43 crc 
restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as
customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 22:39:43 
crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 22:39:43 crc 
restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 22:39:43 crc 
restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 22:39:43 crc 
restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 
22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as
customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc 
restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc 
restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 26 22:39:44 crc 
restorecon[4679]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc 
restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 
crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc 
restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc 
restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc 
restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc 
restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 22:39:44 crc restorecon[4679]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Jan 26 22:39:45 crc kubenswrapper[4793]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 22:39:45 crc kubenswrapper[4793]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 26 22:39:45 crc kubenswrapper[4793]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 22:39:45 crc kubenswrapper[4793]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 22:39:45 crc kubenswrapper[4793]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 26 22:39:45 crc kubenswrapper[4793]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.483613    4793 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490558    4793 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490591    4793 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490603    4793 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490612    4793 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490622    4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490632    4793 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490643    4793 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490654    4793 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490664    4793 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490674    4793 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490683    4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490692    4793 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490700    4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490709    4793 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490717    4793 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490725    4793 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490733    4793 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490741    4793 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490749    4793 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490757    4793 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490765    4793 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490773    4793 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490788    4793 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490796    4793 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490804    4793 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490812    4793 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490819    4793 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490827    4793 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490835    4793 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490843    4793 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490851    4793 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490859    4793 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490867    4793 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490876    4793 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490885    4793 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490894    4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490903    4793 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490911    4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490919    4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490927    4793 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490935    4793 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490942    4793 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490950    4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490957    4793 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490968    4793 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490978    4793 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490987    4793 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490996    4793 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491004    4793 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491012    4793 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491020    4793 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491032    4793 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491041    4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491050    4793 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491058    4793 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491067    4793 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491076    4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491084    4793 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491092    4793 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491100    4793 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491108    4793 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491117    4793 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491125    4793 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491132    4793 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491140    4793 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491147    4793 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491155    4793 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491165    4793 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491173    4793 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491180    4793 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491213    4793 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491386    4793 flags.go:64] FLAG: --address="0.0.0.0"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491404    4793 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491429    4793 flags.go:64] FLAG: --anonymous-auth="true"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491442    4793 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491454    4793 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491464    4793 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491489    4793 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491502    4793 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491511    4793 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491520    4793 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491530    4793 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491540    4793 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491549    4793 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491559    4793 flags.go:64] FLAG: --cgroup-root=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491568    4793 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491578    4793 flags.go:64] FLAG: --client-ca-file=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491589    4793 flags.go:64] FLAG: --cloud-config=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491600    4793 flags.go:64] FLAG: --cloud-provider=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491609    4793 flags.go:64] FLAG: --cluster-dns="[]"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491635    4793 flags.go:64] FLAG: --cluster-domain=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491644    4793 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491654    4793 flags.go:64] FLAG: --config-dir=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491663    4793 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491673    4793 flags.go:64] FLAG: --container-log-max-files="5"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491685    4793 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491694    4793 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491703    4793 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491713    4793 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491722    4793 flags.go:64] FLAG: --contention-profiling="false"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491730    4793 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491751    4793 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491760    4793 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491769    4793 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491781    4793 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491790    4793 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491800    4793 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491809    4793 flags.go:64] FLAG: --enable-load-reader="false"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491818    4793 flags.go:64] FLAG: --enable-server="true"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491826    4793 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491845    4793 flags.go:64] FLAG: --event-burst="100"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491854    4793 flags.go:64] FLAG: --event-qps="50"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491864    4793 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491884    4793 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491894    4793 flags.go:64] FLAG: --eviction-hard=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491905    4793 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491914    4793 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491923    4793 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491932    4793 flags.go:64] FLAG: --eviction-soft=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491943    4793 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491953    4793 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491961    4793 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491970    4793 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491979    4793 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491988    4793 flags.go:64] FLAG: --fail-swap-on="true"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491997    4793 flags.go:64] FLAG: --feature-gates=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492008    4793 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492017    4793 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492026    4793 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492035    4793 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492044    4793 flags.go:64] FLAG: --healthz-port="10248"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492053    4793 flags.go:64] FLAG: --help="false"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492062    4793 flags.go:64] FLAG: --hostname-override=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492072    4793 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492081    4793 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492090    4793 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492099    4793 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492108    4793 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492117    4793 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492126    4793 flags.go:64] FLAG: --image-service-endpoint=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492136    4793 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492145    4793 flags.go:64] FLAG: --kube-api-burst="100"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492154    4793 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492163    4793 flags.go:64] FLAG: --kube-api-qps="50"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492172    4793 flags.go:64] FLAG: --kube-reserved=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492181    4793 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492216    4793 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492225    4793 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492234    4793 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492255    4793 flags.go:64] FLAG: --lock-file=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492265    4793 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492274    4793 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492283    4793 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492296    4793 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492305    4793 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492314    4793 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492322    4793 flags.go:64] FLAG: --logging-format="text"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492331    4793 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492341    4793 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492350    4793 flags.go:64] FLAG: --manifest-url=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492359    4793 flags.go:64] FLAG: --manifest-url-header=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492372    4793 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492382    4793 flags.go:64] FLAG: --max-open-files="1000000"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492393    4793 flags.go:64] FLAG: --max-pods="110"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492403    4793 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492412    4793 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492421    4793 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492430    4793 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492439    4793 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492448    4793 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492457    4793 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492483    4793 flags.go:64] FLAG: --node-status-max-images="50"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492492    4793 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492501    4793 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492510    4793 flags.go:64] FLAG: --pod-cidr=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492518    4793 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492533    4793 flags.go:64] FLAG: --pod-manifest-path=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492542    4793 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492551    4793 flags.go:64] FLAG: --pods-per-core="0"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492559    4793 flags.go:64] FLAG: --port="10250"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492568    4793 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492577    4793 flags.go:64] FLAG: --provider-id=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492586    4793 flags.go:64] FLAG: --qos-reserved=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492595    4793 flags.go:64] FLAG: --read-only-port="10255"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492604    4793 flags.go:64] FLAG: --register-node="true"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492631    4793 flags.go:64] FLAG: --register-schedulable="true"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492640    4793 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492656    4793 flags.go:64] FLAG: --registry-burst="10"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492665    4793 flags.go:64] FLAG: --registry-qps="5"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492674    4793 flags.go:64] FLAG: --reserved-cpus=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492682    4793 flags.go:64] FLAG: --reserved-memory=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492694    4793 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492703    4793 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492712    4793 flags.go:64] FLAG: --rotate-certificates="false"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492721    4793 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492729    4793 flags.go:64] FLAG: --runonce="false"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492739    4793 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492748    4793 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492757    4793 flags.go:64] FLAG: --seccomp-default="false"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492765    4793 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492774    4793 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492783    4793 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492792    4793 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492802    4793 flags.go:64] FLAG: --storage-driver-password="root"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492811    4793 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492820    4793 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492829    4793 flags.go:64] FLAG: --storage-driver-user="root"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492838    4793 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492847    4793 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492856    4793 flags.go:64] FLAG: --system-cgroups=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492865    4793 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492878    4793 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492887    4793 flags.go:64] FLAG: --tls-cert-file=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492895    4793 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492913    4793 flags.go:64] FLAG: --tls-min-version=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492922    4793 flags.go:64] FLAG: --tls-private-key-file=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492940    4793 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492949    4793 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492958    4793 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492967    4793 flags.go:64] FLAG: --v="2"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492979    4793 flags.go:64] FLAG: --version="false"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.493002    4793 flags.go:64] FLAG:
--vmodule="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.493014 4793 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.493023 4793 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493280 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493291 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493299 4793 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493306 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493315 4793 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493322 4793 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493331 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493338 4793 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493346 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493354 4793 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493382 4793 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493395 4793 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493405 4793 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493413 4793 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493422 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493429 4793 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493437 4793 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493447 4793 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493456 4793 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493466 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493475 4793 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493483 4793 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493492 4793 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493501 4793 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493514 4793 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493523 4793 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 22:39:45 crc 
kubenswrapper[4793]: W0126 22:39:45.493531 4793 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493539 4793 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493546 4793 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493554 4793 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493563 4793 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493570 4793 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493578 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493587 4793 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493595 4793 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493603 4793 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493612 4793 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493619 4793 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493627 4793 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493639 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493646 4793 feature_gate.go:330] unrecognized 
feature gate: InsightsOnDemandDataGather Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493654 4793 feature_gate.go:330] unrecognized feature gate: Example Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493663 4793 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493670 4793 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493678 4793 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493685 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493693 4793 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493701 4793 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493709 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493717 4793 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493725 4793 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493733 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493741 4793 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493749 4793 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493757 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 22:39:45 crc 
kubenswrapper[4793]: W0126 22:39:45.493765 4793 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493779 4793 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493789 4793 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493799 4793 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493808 4793 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493817 4793 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493827 4793 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493835 4793 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493844 4793 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493852 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493861 4793 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493870 4793 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493879 4793 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493887 4793 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 
22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493896 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493904 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.493919 4793 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.506586 4793 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.506624 4793 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506752 4793 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506764 4793 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506773 4793 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506785 4793 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506796 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506806 4793 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506815 4793 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506823 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506831 4793 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506839 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506847 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506856 4793 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506863 4793 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506871 4793 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506879 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506888 4793 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506896 4793 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506905 4793 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506912 4793 
feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506921 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506928 4793 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506937 4793 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506948 4793 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506959 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506969 4793 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506979 4793 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506988 4793 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506997 4793 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507006 4793 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507017 4793 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507028 4793 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507037 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507047 4793 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507055 4793 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507065 4793 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507075 4793 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507085 4793 feature_gate.go:330] unrecognized feature gate: Example Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507094 4793 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507103 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507112 4793 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507120 4793 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507128 4793 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507136 4793 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507146 4793 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 22:39:45 crc 
kubenswrapper[4793]: W0126 22:39:45.507154 4793 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507162 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507171 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507180 4793 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507212 4793 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507221 4793 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507229 4793 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507237 4793 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507246 4793 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507254 4793 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507262 4793 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507270 4793 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507278 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507290 4793 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507298 4793 feature_gate.go:330] 
unrecognized feature gate: OVNObservability Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507306 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507315 4793 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507323 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507331 4793 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507339 4793 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507347 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507356 4793 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507364 4793 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507371 4793 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507379 4793 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507388 4793 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507396 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.507409 4793 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false 
NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507697 4793 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507711 4793 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507722 4793 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507732 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507741 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507751 4793 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507761 4793 feature_gate.go:330] unrecognized feature gate: Example Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507769 4793 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507777 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507786 4793 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507794 4793 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507802 4793 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507810 4793 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507818 4793 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507826 4793 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507834 4793 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507842 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507850 4793 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507859 4793 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507867 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 
22:39:45.507875 4793 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507884 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507893 4793 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507900 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507910 4793 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507918 4793 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507926 4793 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507934 4793 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507942 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507949 4793 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507957 4793 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507965 4793 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507973 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507982 4793 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507989 4793 feature_gate.go:330] unrecognized feature gate: 
ConsolePluginContentSecurityPolicy Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507998 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508006 4793 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508014 4793 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508022 4793 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508030 4793 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508038 4793 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508045 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508053 4793 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508062 4793 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508072 4793 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508082 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508091 4793 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508100 4793 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508109 4793 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508118 4793 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508125 4793 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508134 4793 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508142 4793 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508149 4793 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508158 4793 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508165 4793 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508173 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508181 4793 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508214 4793 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508222 4793 
feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508231 4793 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508238 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508246 4793 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508254 4793 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508262 4793 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508270 4793 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508277 4793 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508285 4793 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508293 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508305 4793 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508314 4793 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.508326 4793 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.509478 4793 server.go:940] "Client rotation is on, will bootstrap in background" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.515406 4793 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.515545 4793 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.517319 4793 server.go:997] "Starting client certificate rotation" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.517425 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.517675 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-11 20:41:28.784883191 +0000 UTC Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.517814 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.541594 4793 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.545718 4793 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.547175 4793 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.567994 4793 log.go:25] "Validated CRI v1 runtime API" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.610660 4793 log.go:25] "Validated CRI v1 image API" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.613363 4793 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.618341 4793 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-26-22-35-46-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.618389 4793 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.646776 4793 manager.go:217] Machine: {Timestamp:2026-01-26 22:39:45.643179607 +0000 UTC m=+0.631951159 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:601eed9e-4791-49d9-902a-c6f8f21a8d0a BootID:4aaaf2a3-8422-4886-9dc8-d9142aad48d5 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 
Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:73:9f:f6 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:73:9f:f6 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:9c:c5:b0 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:dd:72:c2 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:1c:e7:5d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:80:be:23 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:d6:af:ad:b9:f9:3a Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:16:7e:0c:bc:0b:dd Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.647677 4793 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.647951 4793 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.648477 4793 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.648766 4793 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.648824 4793 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":nul
l,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.649358 4793 topology_manager.go:138] "Creating topology manager with none policy" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.649383 4793 container_manager_linux.go:303] "Creating device plugin manager" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.649835 4793 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.649871 4793 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.651005 4793 state_mem.go:36] "Initialized new in-memory state store" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.651677 4793 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.656561 4793 kubelet.go:418] "Attempting to sync node with API server" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.656600 4793 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.656718 4793 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.656748 4793 kubelet.go:324] "Adding apiserver pod source" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.656804 4793 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.664366 4793 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.665491 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.665481 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.665618 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.665629 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.665686 4793 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.670347 4793 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.672137 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.672212 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.672229 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.672247 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.672270 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.672285 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.672299 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.672323 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.672343 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.672358 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.672379 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.672748 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.674132 4793 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.674990 4793 server.go:1280] "Started kubelet" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.675822 4793 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 26 22:39:45 crc systemd[1]: Started Kubernetes Kubelet. Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.677228 4793 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.677231 4793 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.684758 4793 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.685444 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.685523 4793 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.686233 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 10:05:55.002875021 +0000 UTC Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.686487 4793 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.687258 4793 server.go:460] "Adding debug handlers to kubelet server" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.687555 4793 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 
22:39:45.687586 4793 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.687548 4793 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.689158 4793 factory.go:55] Registering systemd factory Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.689345 4793 factory.go:221] Registration of the systemd container factory successfully Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.692951 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.693233 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.693996 4793 factory.go:153] Registering CRI-O factory Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.694051 4793 factory.go:221] Registration of the crio container factory successfully Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.694225 4793 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.694273 4793 factory.go:103] Registering Raw factory Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.694302 4793 manager.go:1196] Started watching for new ooms in manager Jan 26 
22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.695144 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="200ms" Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.694337 4793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.46:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e6907c3dd6655 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 22:39:45.674937941 +0000 UTC m=+0.663709493,LastTimestamp:2026-01-26 22:39:45.674937941 +0000 UTC m=+0.663709493,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.696172 4793 manager.go:319] Starting recovery of all containers Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704127 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704211 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 
22:39:45.704239 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704260 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704281 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704302 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704321 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704341 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704365 4793 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704384 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704406 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704426 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704445 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704469 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704489 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704510 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704528 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704547 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704567 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704586 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704605 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704627 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704649 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704673 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704725 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704745 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704769 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704791 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704814 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704835 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704855 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704874 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704894 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704913 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704964 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704986 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705006 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705025 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705044 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705097 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705117 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705136 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705159 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705179 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705220 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705241 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705261 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705282 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705303 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705322 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705344 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705362 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705391 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.707509 4793 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.707678 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.707805 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.707971 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.708094 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.708237 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.708375 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.708495 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.708624 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.708739 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.708861 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.708974 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.709132 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.709319 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.709442 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.709567 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.709690 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.709803 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.709913 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.710035 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.710151 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.710300 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.710415 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.710530 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.710703 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.710821 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.710937 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.711049 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.711173 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.711344 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.711468 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.711583 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.711704 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.711829 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.711940 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.712061 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.712174 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.712400 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.712516 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.712630 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.712764 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.712882 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.712999 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.713449 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.713946 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.714168 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.714394 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.714577 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.714763 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.714932 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.715129 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.715514 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.715711 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.715885 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.716080 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.716296 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.716497 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.716668 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.716844 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.717008 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.717279 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.717498 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.717709 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.717873 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.718051 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.718251 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.718445 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.718618 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.718775 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.718933 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.719098 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.719288 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.719480 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.719681 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.719875 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.720035 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.720163 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.720342 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.720473 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.720636 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.720778 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.720915 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.721121 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.721299 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.721431 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.721552 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.721666 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.721782 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.721941 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.722078 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.722306 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c"
volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.722435 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.722557 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.722845 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.722993 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.723123 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728243 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728329 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728473 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728536 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728564 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728598 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728622 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728649 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728836 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728863 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728892 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728913 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728941 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" 
seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728962 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728983 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729009 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729055 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729083 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729103 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 
22:39:45.729124 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729151 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729175 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729413 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729443 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729469 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729518 4793 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729540 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729569 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729636 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729659 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729748 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729799 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729824 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729838 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729852 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729893 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729919 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729935 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729950 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729963 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729980 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729992 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730009 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730021 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730032 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730049 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730062 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730076 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730090 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730100 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730117 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730130 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730141 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730155 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730166 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730181 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730208 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730239 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730255 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730267 4793 reconstruct.go:97] "Volume reconstruction finished" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730277 4793 reconciler.go:26] "Reconciler: start to sync state" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.734395 4793 manager.go:324] Recovery completed Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.744407 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.747898 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.747957 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.747979 
4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.751648 4793 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.751664 4793 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.751683 4793 state_mem.go:36] "Initialized new in-memory state store" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.755537 4793 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.759480 4793 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.759581 4793 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.759665 4793 kubelet.go:2335] "Starting kubelet main sync loop" Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.759827 4793 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.760919 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.761026 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 26 22:39:45 
crc kubenswrapper[4793]: I0126 22:39:45.769777 4793 policy_none.go:49] "None policy: Start" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.771394 4793 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.771454 4793 state_mem.go:35] "Initializing new in-memory state store" Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.787327 4793 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.844765 4793 manager.go:334] "Starting Device Plugin manager" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.844838 4793 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.844864 4793 server.go:79] "Starting device plugin registration server" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.845623 4793 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.845658 4793 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.845997 4793 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.846147 4793 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.846238 4793 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.860121 4793 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.860327 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.860888 4793 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.861947 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.862018 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.862033 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.862376 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.862810 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.862899 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.863995 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.864055 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.864076 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.864315 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.864500 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.864596 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.864689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.864818 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.864971 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.865503 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.865550 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.865569 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.865755 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.865785 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.865811 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.865837 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.866008 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.866071 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.867043 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.867086 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.867109 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.867406 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.867546 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.867684 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.867414 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.867502 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.868025 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.868806 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.868881 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.868904 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.869311 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.869363 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.869387 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.869473 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.869523 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.870671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.870790 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.870892 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.895913 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="400ms"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.934450 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.934514 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.934547 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.934576 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.934605 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.934633 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.934662 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.934686 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.934782 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.934925 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.934967 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.935008 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.935048 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.935090 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.935122 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.945894 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.949051 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.949096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.949112 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.949150 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.950083 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.036641 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.036752 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.036830 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.036991 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.036978 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037072 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037091 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037003 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037143 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.036856 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037177 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037184 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037286 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037312 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037329 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037371 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037366 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037457 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037490 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037526 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037523 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037597 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037672 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037679 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037733 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037809 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037848 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037882 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037921 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037996 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.150519 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.152609 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.152695 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.152722 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.152775 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: E0126 22:39:46.153403 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.202345 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.217901 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.242153 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.256644 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.264447 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: W0126 22:39:46.265233 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-0cde3547e11958469d79b54aaf3ff10882bd2dc85c8cb7e4c0cbb2b3bbdb5216 WatchSource:0}: Error finding container 0cde3547e11958469d79b54aaf3ff10882bd2dc85c8cb7e4c0cbb2b3bbdb5216: Status 404 returned error can't find the container with id 0cde3547e11958469d79b54aaf3ff10882bd2dc85c8cb7e4c0cbb2b3bbdb5216
Jan 26 22:39:46 crc kubenswrapper[4793]: W0126 22:39:46.268885 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-87b51d62adcd6976bb8572556cb4f4789c17f635bd7ff1ef635a431be2b93581 WatchSource:0}: Error finding container 87b51d62adcd6976bb8572556cb4f4789c17f635bd7ff1ef635a431be2b93581: Status 404 returned error can't find the container with id 87b51d62adcd6976bb8572556cb4f4789c17f635bd7ff1ef635a431be2b93581
Jan 26 22:39:46 crc kubenswrapper[4793]: W0126 22:39:46.279035 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-65e6692cd526ef5352d60ee76112852af4af6f1053da8a890b3bfaf8b0334d0c WatchSource:0}: Error finding container 65e6692cd526ef5352d60ee76112852af4af6f1053da8a890b3bfaf8b0334d0c: Status 404 returned error can't find the container with id 65e6692cd526ef5352d60ee76112852af4af6f1053da8a890b3bfaf8b0334d0c
Jan 26 22:39:46 crc kubenswrapper[4793]: W0126 22:39:46.285632 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-e9840f8a3a30728a3c07d7ee52004fa459a426fe36845e2a7a3dbb5940ca56a8 WatchSource:0}: Error finding container e9840f8a3a30728a3c07d7ee52004fa459a426fe36845e2a7a3dbb5940ca56a8: Status 404 returned error can't find the container with id e9840f8a3a30728a3c07d7ee52004fa459a426fe36845e2a7a3dbb5940ca56a8
Jan 26 22:39:46 crc kubenswrapper[4793]: W0126 22:39:46.288677 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-da896677d96a1f5c3b7105d04e84e8465e5192dd899f6267154b3fd2c2a1547a WatchSource:0}: Error finding container da896677d96a1f5c3b7105d04e84e8465e5192dd899f6267154b3fd2c2a1547a: Status 404 returned error can't find the container with id da896677d96a1f5c3b7105d04e84e8465e5192dd899f6267154b3fd2c2a1547a
Jan 26 22:39:46 crc kubenswrapper[4793]: E0126 22:39:46.297936 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="800ms"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.553952 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.556665 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.556733 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.556753 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.556800 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: E0126 22:39:46.557531 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc"
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.677408 4793 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.686383 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:54:00.932888546 +0000 UTC
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.767264 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0cde3547e11958469d79b54aaf3ff10882bd2dc85c8cb7e4c0cbb2b3bbdb5216"}
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.769862 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"da896677d96a1f5c3b7105d04e84e8465e5192dd899f6267154b3fd2c2a1547a"}
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.772091 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e9840f8a3a30728a3c07d7ee52004fa459a426fe36845e2a7a3dbb5940ca56a8"}
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.773705 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"65e6692cd526ef5352d60ee76112852af4af6f1053da8a890b3bfaf8b0334d0c"}
Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.774958 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"87b51d62adcd6976bb8572556cb4f4789c17f635bd7ff1ef635a431be2b93581"}
Jan 26 22:39:46 crc kubenswrapper[4793]: W0126 22:39:46.896855 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused
Jan 26 22:39:46 crc kubenswrapper[4793]: E0126 22:39:46.896992 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError"
Jan 26 22:39:46 crc kubenswrapper[4793]: W0126 22:39:46.915222 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused
Jan 26 22:39:46 crc kubenswrapper[4793]: E0126 22:39:46.915305 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError"
Jan 26 22:39:47 crc kubenswrapper[4793]: W0126 22:39:47.076712 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused
Jan 26 22:39:47 crc kubenswrapper[4793]: E0126 22:39:47.076828 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError"
Jan 26 22:39:47 crc kubenswrapper[4793]: E0126 22:39:47.099403 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="1.6s"
Jan 26 22:39:47 crc kubenswrapper[4793]: W0126 22:39:47.140073 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused
Jan 26 22:39:47 crc kubenswrapper[4793]: E0126 22:39:47.140235 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.357710 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.359243 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.359313 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.359336 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.359377 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 26 22:39:47 crc kubenswrapper[4793]: E0126 22:39:47.359976 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.676685 4793 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.686807 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 14:28:32.675922172 +0000 UTC
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.739006 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 26 22:39:47 crc kubenswrapper[4793]: E0126 22:39:47.740508 4793 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.781673 4793 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6" exitCode=0
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.781780 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.781777 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6"}
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.784399 4793 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ae8e9b05a172bfe620de76d3f4da816441ab8c2a36ae76a1fde1c0702d90abd8" exitCode=0
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.784489 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc"
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ae8e9b05a172bfe620de76d3f4da816441ab8c2a36ae76a1fde1c0702d90abd8"} Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.784611 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.784659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.784710 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.784729 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.785998 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.786032 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.786045 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.787088 4793 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018" exitCode=0 Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.787163 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018"} Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.787243 4793 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.788378 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.788425 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.788443 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.789584 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3"} Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.792277 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb" exitCode=0 Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.792336 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb"} Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.792409 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.793788 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.793855 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.793883 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.797820 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.798652 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.798681 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.798694 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.676982 4793 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.687388 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:22:32.774076427 +0000 UTC Jan 26 22:39:48 crc kubenswrapper[4793]: W0126 22:39:48.698859 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 26 22:39:48 crc kubenswrapper[4793]: E0126 22:39:48.699034 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 26 22:39:48 crc kubenswrapper[4793]: E0126 22:39:48.701124 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="3.2s" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.799533 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8c3dd342b8a6dcc8374e6310fbe8b8ac499ea042962ef9442f2a4a143650fbcd"} Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.799711 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.800727 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.800761 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.800773 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.802919 4793 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="524898d7681667e5eaaf2849d4b40c29cdfe17ea917d6324e27f5373d0415aa3" exitCode=0 Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.802980 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"524898d7681667e5eaaf2849d4b40c29cdfe17ea917d6324e27f5373d0415aa3"} Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.803085 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.803773 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.803799 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.803811 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.810012 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a03472d948157db27ba9a9cf1410100a91b86b0e07784e05cd870871099ad333"} Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.810059 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f2a1df857107cb1b07fd3524ba4d508bb2694a49e2de3a96c9938ec4bbdecef6"} Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.810080 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ab5197efcde6d2061f530a68ea7c0c99ec4446554b84a1811a7d970a43797ab8"} Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.810314 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:48 crc 
kubenswrapper[4793]: I0126 22:39:48.811959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.812023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.812042 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.815531 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc"} Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.815573 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.815609 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c"} Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.815636 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35"} Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.817007 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.817138 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.817227 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.820062 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b"} Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.820111 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088"} Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.820128 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2"} Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.820144 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955"} Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.960865 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.964159 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.964226 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:48 crc 
kubenswrapper[4793]: I0126 22:39:48.964239 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.964282 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 22:39:48 crc kubenswrapper[4793]: E0126 22:39:48.964934 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc" Jan 26 22:39:49 crc kubenswrapper[4793]: W0126 22:39:49.351320 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 26 22:39:49 crc kubenswrapper[4793]: E0126 22:39:49.351469 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.382108 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.677624 4793 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 26 22:39:49 crc kubenswrapper[4793]: W0126 22:39:49.685577 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 26 22:39:49 crc kubenswrapper[4793]: E0126 22:39:49.685687 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.688399 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 10:34:25.857161041 +0000 UTC Jan 26 22:39:49 crc kubenswrapper[4793]: W0126 22:39:49.688796 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 26 22:39:49 crc kubenswrapper[4793]: E0126 22:39:49.688946 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.829267 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7"} Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.829401 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.835629 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.835754 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.835784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.837855 4793 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="da0cb395873753c08aeba8f2e31e4d4e6c7d0325eaa9f75e0e29941c8b8cc14e" exitCode=0 Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.838074 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.838994 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.839672 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"da0cb395873753c08aeba8f2e31e4d4e6c7d0325eaa9f75e0e29941c8b8cc14e"} Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.839742 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.839995 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.840747 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.842053 4793 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.842106 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.842124 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.843148 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.843229 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.843250 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.844116 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.844161 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.844182 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.845292 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.845342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.845360 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.523286 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.689396 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 12:36:59.630674377 +0000 UTC
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.847437 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.847454 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.847419 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"541574ffed2413b310e56114b191a3f1eb3ba027205d150337a6cf3cd9b900c5"}
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.847595 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"110889ffccbcfde6eab709244c74d186a223f0e8d79f15d118d9efabecf77f91"}
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.847634 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b48e7f71fb3c312a54fefa5a1fe3e661ac2b12fb841496d30e63b35c9c0c5e91"}
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.847659 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.847438 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.849244 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.849275 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.849285 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.849304 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.849348 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.849362 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.849403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.849442 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.849458 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.152231 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.690463 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 01:44:21.181014808 +0000 UTC
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.856251 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7818fd042574496a56a2a1d84d74e15bb85c3b0196001fc49bf89087fa6778ec"}
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.856333 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.856404 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.856331 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8940542dcbafb8e5d9ade7eadf58c6d2fb344a81af7995b9a4ae7cbd4b20bf0f"}
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.857553 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.857597 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.857609 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.858410 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.858459 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.858479 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.945254 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.024334 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.024542 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.062604 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.062684 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.062708 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.165844 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.167918 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.167988 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.168008 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.168060 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.690667 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 07:56:05.125569026 +0000 UTC
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.860037 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.860145 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.865908 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.865981 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.866009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.866051 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.866094 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.866115 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:53 crc kubenswrapper[4793]: I0126 22:39:53.635342 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 22:39:53 crc kubenswrapper[4793]: I0126 22:39:53.635696 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:53 crc kubenswrapper[4793]: I0126 22:39:53.637746 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:53 crc kubenswrapper[4793]: I0126 22:39:53.637795 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:53 crc kubenswrapper[4793]: I0126 22:39:53.637813 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:53 crc kubenswrapper[4793]: I0126 22:39:53.691289 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 08:16:16.09618776 +0000 UTC
Jan 26 22:39:54 crc kubenswrapper[4793]: I0126 22:39:54.692779 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 18:08:34.429870291 +0000 UTC
Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.142025 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.142363 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.144306 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.144382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.144403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.653703 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.653978 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.656114 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.656182 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.656248 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.662840 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.693819 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 03:37:49.476799072 +0000 UTC Jan 26 22:39:55 crc kubenswrapper[4793]: E0126 22:39:55.861077 4793 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.870551 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.871890 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.871964 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.871985 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:56 crc kubenswrapper[4793]: I0126 22:39:56.636121 4793 patch_prober.go:28] interesting pod/kube-controller-manager-crc 
container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 22:39:56 crc kubenswrapper[4793]: I0126 22:39:56.636348 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 22:39:56 crc kubenswrapper[4793]: I0126 22:39:56.694808 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 18:23:39.232197862 +0000 UTC Jan 26 22:39:57 crc kubenswrapper[4793]: I0126 22:39:57.695725 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 10:57:39.222791067 +0000 UTC Jan 26 22:39:58 crc kubenswrapper[4793]: I0126 22:39:58.696513 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 22:11:45.493709772 +0000 UTC Jan 26 22:39:59 crc kubenswrapper[4793]: I0126 22:39:59.387105 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:39:59 crc kubenswrapper[4793]: I0126 22:39:59.387300 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:59 crc kubenswrapper[4793]: I0126 22:39:59.388447 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:59 crc kubenswrapper[4793]: 
I0126 22:39:59.388489 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:59 crc kubenswrapper[4793]: I0126 22:39:59.388503 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:59 crc kubenswrapper[4793]: I0126 22:39:59.696902 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 04:31:03.761270811 +0000 UTC Jan 26 22:39:59 crc kubenswrapper[4793]: I0126 22:39:59.896248 4793 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60310->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 26 22:39:59 crc kubenswrapper[4793]: I0126 22:39:59.896352 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60310->192.168.126.11:17697: read: connection reset by peer" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.201707 4793 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Jan 26 
22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.201820 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.207943 4793 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.208031 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.457077 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.457378 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.458694 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.458730 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.458744 
4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.504094 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.533236 4793 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]log ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]etcd ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/priority-and-fairness-filter ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/start-apiextensions-informers ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/start-apiextensions-controllers ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/crd-informer-synced ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/start-system-namespaces-controller ok 
Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 26 22:40:00 crc kubenswrapper[4793]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 26 22:40:00 crc kubenswrapper[4793]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/bootstrap-controller ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/start-kube-aggregator-informers ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/apiservice-registration-controller ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/apiservice-discovery-controller ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]autoregister-completion ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/apiservice-openapi-controller ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 26 22:40:00 crc 
kubenswrapper[4793]: livez check failed Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.535032 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.697594 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 21:42:16.54731371 +0000 UTC Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.886677 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.888980 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7" exitCode=255 Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.889125 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.889128 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7"} Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.889454 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.890118 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.890217 4793 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.890237 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.890926 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.891106 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.891266 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.892538 4793 scope.go:117] "RemoveContainer" containerID="2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.904619 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 26 22:40:01 crc kubenswrapper[4793]: I0126 22:40:01.698340 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 19:09:18.196009219 +0000 UTC Jan 26 22:40:01 crc kubenswrapper[4793]: I0126 22:40:01.895224 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 22:40:01 crc kubenswrapper[4793]: I0126 22:40:01.897454 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d"} Jan 26 22:40:01 crc kubenswrapper[4793]: I0126 
22:40:01.897583 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:40:01 crc kubenswrapper[4793]: I0126 22:40:01.897716 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:40:01 crc kubenswrapper[4793]: I0126 22:40:01.898812 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:01 crc kubenswrapper[4793]: I0126 22:40:01.898874 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:01 crc kubenswrapper[4793]: I0126 22:40:01.898892 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:01 crc kubenswrapper[4793]: I0126 22:40:01.899248 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:01 crc kubenswrapper[4793]: I0126 22:40:01.899295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:01 crc kubenswrapper[4793]: I0126 22:40:01.899306 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:02 crc kubenswrapper[4793]: I0126 22:40:02.698641 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 08:52:04.14825634 +0000 UTC Jan 26 22:40:03 crc kubenswrapper[4793]: I0126 22:40:03.699826 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 21:39:31.737246139 +0000 UTC Jan 26 22:40:04 crc kubenswrapper[4793]: I0126 22:40:04.700174 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2025-11-24 18:22:49.188016649 +0000 UTC Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.189413 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.192914 4793 trace.go:236] Trace[390218279]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 22:39:54.935) (total time: 10257ms): Jan 26 22:40:05 crc kubenswrapper[4793]: Trace[390218279]: ---"Objects listed" error: 10257ms (22:40:05.192) Jan 26 22:40:05 crc kubenswrapper[4793]: Trace[390218279]: [10.257607251s] [10.257607251s] END Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.192945 4793 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.193008 4793 trace.go:236] Trace[314768884]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 22:39:52.732) (total time: 12459ms): Jan 26 22:40:05 crc kubenswrapper[4793]: Trace[314768884]: ---"Objects listed" error: 12459ms (22:40:05.192) Jan 26 22:40:05 crc kubenswrapper[4793]: Trace[314768884]: [12.459970214s] [12.459970214s] END Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.193041 4793 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.195369 4793 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.195414 4793 trace.go:236] Trace[1326921677]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 22:39:53.659) (total time: 11536ms): Jan 26 22:40:05 crc kubenswrapper[4793]: Trace[1326921677]: ---"Objects listed" error: 11535ms 
(22:40:05.195) Jan 26 22:40:05 crc kubenswrapper[4793]: Trace[1326921677]: [11.536007909s] [11.536007909s] END Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.195445 4793 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.196747 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.199090 4793 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.222146 4793 trace.go:236] Trace[966903809]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 22:39:53.834) (total time: 11387ms): Jan 26 22:40:05 crc kubenswrapper[4793]: Trace[966903809]: ---"Objects listed" error: 11386ms (22:40:05.220) Jan 26 22:40:05 crc kubenswrapper[4793]: Trace[966903809]: [11.387525821s] [11.387525821s] END Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.222249 4793 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.383126 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.388419 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.529343 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.529680 4793 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.535175 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.671266 4793 apiserver.go:52] "Watching apiserver" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.677791 4793 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.678874 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.679309 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.679338 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.679383 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.679795 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.679797 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.679851 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.680179 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.680527 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.680957 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.681345 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.681450 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.681572 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.683303 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.683406 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.683636 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.683797 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.683833 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.684245 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.688589 4793 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.698148 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.698401 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.698584 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.698743 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.698890 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.699050 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.699221 4793 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.699387 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.699545 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.699389 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.699555 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.699698 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.699733 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.699863 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700010 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700108 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700135 
4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700118 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700163 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700201 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700230 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700253 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" 
(UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700272 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700297 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700348 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700378 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700397 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 22:40:05 crc 
kubenswrapper[4793]: I0126 22:40:05.700416 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700437 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700456 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700491 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700510 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700548 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700567 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700585 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700584 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700606 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700632 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700653 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700677 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700696 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700716 4793 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700736 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700758 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700756 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700780 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700800 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700846 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700866 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700886 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700905 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700940 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700961 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700979 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701014 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701034 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701055 4793 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701075 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701092 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701114 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701132 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701154 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701174 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701231 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701251 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701269 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701289 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701327 4793 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701346 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701367 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701385 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701406 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701425 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701444 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701463 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701483 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701503 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701520 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701542 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701568 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701591 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701612 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701632 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701652 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" 
(UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701672 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701689 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701707 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701724 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701743 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701760 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701778 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701798 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701816 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701834 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701850 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" 
(UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701867 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701883 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701898 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701917 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701936 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701952 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701968 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701986 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.708817 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.708872 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.708904 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 
26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.708940 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.708968 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.708996 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709020 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709077 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709112 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709142 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709168 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709210 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709243 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709270 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 
22:40:05.709297 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709322 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709350 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709378 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709406 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709431 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709459 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709490 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709516 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709542 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709567 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709593 4793 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709618 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709645 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709682 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709710 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709738 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709767 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709793 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709923 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709959 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709992 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: 
\"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710018 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710045 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710070 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710097 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710125 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710153 4793 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710182 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710233 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710264 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710300 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710327 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710354 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710382 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710420 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710451 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710477 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710505 4793 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710561 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710591 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710617 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710644 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710689 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" 
(UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710870 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710905 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710932 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710958 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710984 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701007 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701132 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701372 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701594 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701707 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701786 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.702079 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.702577 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.702679 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.702859 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.703178 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.706762 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.706762 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.706772 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.706772 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.706811 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 22:11:12.215931821 +0000 UTC Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.707018 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.707045 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.707087 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.707236 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.707493 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.707492 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.707538 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.707867 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.711812 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.711011 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.711935 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.711962 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.711985 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.712008 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.707910 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.707880 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.708260 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.708334 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.708379 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.708441 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.708625 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709029 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709244 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709637 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709640 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.712035 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.712166 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.712211 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.712357 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.712444 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.712627 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.712668 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.712933 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.712994 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713106 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713153 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713247 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713325 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713354 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713431 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713472 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713509 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713554 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713593 4793 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713631 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713668 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713711 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713748 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713784 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713790 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713820 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713858 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713900 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713938 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 
22:40:05.713949 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713977 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714021 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714058 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714093 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714129 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714224 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714270 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714313 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714357 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714395 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714432 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714473 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714511 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714548 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714590 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714631 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714675 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714722 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714760 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714904 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714937 4793 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714958 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714979 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714999 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715019 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715038 4793 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 
22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715058 4793 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715078 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715099 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715120 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715141 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715161 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715208 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715230 4793 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715248 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715269 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715296 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715326 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715359 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715380 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715410 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715432 4793 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715452 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715474 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715496 4793 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715512 4793 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715526 4793 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715543 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node 
\"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715560 4793 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715594 4793 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715610 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715626 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715640 4793 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715654 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715669 4793 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715682 4793 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715705 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715723 4793 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715737 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715754 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715772 4793 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715817 4793 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715832 4793 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715849 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713972 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713945 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710634 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710758 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710828 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.718420 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710998 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.711506 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.711851 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.711866 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714136 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714379 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714412 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714653 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714789 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714817 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709958 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714860 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715913 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.716066 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.716121 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.716470 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.716373 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.717264 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.717516 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.717557 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.718592 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.718635 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.718697 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.718941 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.719281 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.719327 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.719413 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.719449 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.719852 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.719854 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.720121 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.720292 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.720695 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.720699 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.720819 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.720958 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.721010 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.721090 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.721211 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.721432 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.721451 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.721688 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.721731 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.721777 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.722277 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.722349 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.722422 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.722757 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.723278 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.723402 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.723488 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.723507 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.723848 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.724523 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.724614 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.724712 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.724772 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.724893 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.724912 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.725127 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.725976 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.726599 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.726804 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.727204 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.727240 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.727410 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.727516 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.727615 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.727862 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.728097 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.728282 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.728226 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.728815 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.728874 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:06.228849226 +0000 UTC m=+21.217620788 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.728903 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:06.228893737 +0000 UTC m=+21.217665349 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.729278 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.732526 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.733101 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.733786 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.734912 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.735161 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.735951 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). 
InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.736148 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.736211 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.736231 4793 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.736380 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.736425 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.736440 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.736665 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.736724 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.736759 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.736796 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.736917 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.736964 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.737539 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.737558 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.738175 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.740549 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:40:06.240527411 +0000 UTC m=+21.229299013 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.741880 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.742310 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.742763 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.742813 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.742368 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.743325 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.746341 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.751278 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.751376 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.751602 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.751730 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:06.251708111 +0000 UTC m=+21.240479623 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.753527 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.753567 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.753588 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.753615 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.753748 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-26 22:40:06.253725748 +0000 UTC m=+21.242497260 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.758452 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.758738 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.764954 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.767138 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.767173 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.767344 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768099 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768221 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768372 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768444 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768653 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768672 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768696 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768883 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768722 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768827 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768912 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768954 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768980 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.769839 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.770272 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.770563 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.770708 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.771169 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.771635 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.771757 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.771926 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.772323 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.772807 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.773260 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.773599 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.773602 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.773668 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.773747 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.773876 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.774351 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.774429 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.774721 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.772975 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.775027 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.775154 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.775550 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.773161 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.775895 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.777509 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.777793 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.778072 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.779630 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.779773 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.780041 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.780381 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.780804 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.781272 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.781355 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.781427 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.781482 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.783474 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.784496 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.785386 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.788171 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.788409 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.789138 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.790990 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.792088 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.795860 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.796794 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.798859 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.799802 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.800505 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.801401 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.801996 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.802836 4793 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.804821 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.805635 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.806300 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.806483 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.807844 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.808603 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.809598 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.810175 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.811103 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.812383 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.813180 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.813846 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.815036 4793 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.815178 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.816533 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.816669 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 
22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.816777 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.816934 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.816955 4793 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.816973 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.816988 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817006 4793 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817019 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on 
node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817032 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817045 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817057 4793 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817083 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817101 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817112 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817122 4793 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" 
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817132 4793 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817142 4793 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817152 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817161 4793 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817170 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817182 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817208 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817218 4793 reconciler_common.go:293] "Volume detached for 
volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817227 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817241 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817253 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817263 4793 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817271 4793 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817281 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817289 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 26 
22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817308 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817316 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817324 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817332 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817341 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817351 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817360 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817368 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817087 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817549 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817569 4793 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817585 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817714 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817726 4793 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817735 4793 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817746 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817755 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817786 4793 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817796 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817789 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817805 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817843 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817858 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817871 4793 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817883 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817894 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817905 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817916 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817928 4793 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817940 4793 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817941 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817952 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817963 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817975 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817986 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817997 4793 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818008 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818019 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818031 4793 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818043 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818054 4793 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818065 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818076 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818086 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818097 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818109 4793 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818199 4793 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818276 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818348 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818463 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818472 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818502 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818561 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818575 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818590 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818602 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818612 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818622 4793 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818633 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818643 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818654 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818663 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818674 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818684 4793 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818695 4793 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818705 4793 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818716 4793 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818727 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818740 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818750 4793 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818760 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818771 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818783 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818794 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818805 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818820 4793 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818835 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818847 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818860 4793 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818870 4793 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818881 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818893 4793 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818904 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818916 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818928 4793 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818941 4793 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818951 4793 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818961 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818970 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818983 4793 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818991 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819000 4793 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819058 4793 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819068 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819076 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819085 4793 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819094 4793 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819102 4793 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819071 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819114 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819218 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819230 4793 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819240 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819251 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819260 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819269 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819277 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819286 4793 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819295 4793 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819304 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819315 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819345 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819355 4793 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819364 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819372 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819381 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819393 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819401 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819410 4793 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819436 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819446 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819445 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819455 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819498 4793 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819509 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819906 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.821918 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.823094 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.823653 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.824758 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.825447 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.826337 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.826945 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.828452 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir"
podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.829014 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.829066 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.830107 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.830777 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.831686 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.832681 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.833559 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.834006 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.834475 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.835445 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.836031 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.836945 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.837461 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.846343 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.856159 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.865095 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.874890 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.890733 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.905645 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.916636 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.918637 4793 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.919338 4793 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.919827 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.932881 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.998172 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 26 22:40:06 crc kubenswrapper[4793]: W0126 22:40:06.011328 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-a51ab67f3d21e65993130341e224edf516e6ea92be155105bb35d9d4b2b9c500 WatchSource:0}: Error finding container a51ab67f3d21e65993130341e224edf516e6ea92be155105bb35d9d4b2b9c500: Status 404 returned error can't find the container with id a51ab67f3d21e65993130341e224edf516e6ea92be155105bb35d9d4b2b9c500
Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.012127 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 26 22:40:06 crc kubenswrapper[4793]: W0126 22:40:06.021619 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-5b094a43182ef9056e7f82093e9b10edb325683af2fbfcdc16d10059506a1ea7 WatchSource:0}: Error finding container 5b094a43182ef9056e7f82093e9b10edb325683af2fbfcdc16d10059506a1ea7: Status 404 returned error can't find the container with id 5b094a43182ef9056e7f82093e9b10edb325683af2fbfcdc16d10059506a1ea7
Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.047273 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.324354 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.324424 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.324462 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.324482 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.324501 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.324630 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.324647 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.324658 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.324718 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:07.324704213 +0000 UTC m=+22.313475725 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.325104 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.325129 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:07.325121835 +0000 UTC m=+22.313893337 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.325179 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:40:07.325173596 +0000 UTC m=+22.313945108 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.325289 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.325343 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.325352 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.325376 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.325381 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:07.325373552 +0000 UTC m=+22.314145064 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.325498 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:07.325469824 +0000 UTC m=+22.314241376 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.712782 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 01:34:55.944418654 +0000 UTC
Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.914690 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b"}
Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.914750 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c"}
Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.914763 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ff2b657cc916f0eaa1e3f0c018756b8d04955a2841d233df4fc4b2917266b4e3"}
Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.916350 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5b094a43182ef9056e7f82093e9b10edb325683af2fbfcdc16d10059506a1ea7"}
Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.917828 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5"}
Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.917863 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a51ab67f3d21e65993130341e224edf516e6ea92be155105bb35d9d4b2b9c500"}
Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.932718 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.943830 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.953642 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.964104 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.978578 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.996689 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.011601 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.031765 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.050003 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.064965 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.080743 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.102012 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.117443 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.134861 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.146437 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.163062 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.333395 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.333538 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.333633 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:40:09.333576076 +0000 UTC m=+24.322347588 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.333687 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.333741 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.333765 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.333861 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.333883 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.333903 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.333943 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.333946 4793 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:09.333939046 +0000 UTC m=+24.322710558 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.334083 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:09.334051469 +0000 UTC m=+24.322823031 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.334219 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.334285 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-26 22:40:09.334264675 +0000 UTC m=+24.323036247 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.334386 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.334409 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.334422 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.334467 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:09.33445753 +0000 UTC m=+24.323229072 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.713095 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 20:48:20.781609183 +0000 UTC Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.760652 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.760676 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.761158 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.761322 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.761501 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.762064 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.764565 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.765489 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 26 22:40:08 crc kubenswrapper[4793]: I0126 22:40:08.713695 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 23:29:52.7478337 +0000 UTC Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.349339 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.349468 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.349515 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.349553 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.349587 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:40:13.349552612 +0000 UTC m=+28.338324154 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.349634 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.349706 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.349770 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:13.349752248 +0000 UTC m=+28.338523790 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.349779 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.349804 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.349823 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.349874 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:13.349858401 +0000 UTC m=+28.338629943 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.349930 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.349968 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:13.349955593 +0000 UTC m=+28.338727145 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.350043 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.350062 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.350076 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.350114 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:13.350102618 +0000 UTC m=+28.338874160 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.714892 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 08:03:53.787173525 +0000 UTC Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.760866 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.760956 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.760989 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.761235 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.761399 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.761692 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.929006 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce"} Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.947763 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:09Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.967363 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:09Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.986780 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:09Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:10 crc kubenswrapper[4793]: I0126 22:40:10.009169 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, 
/tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:10Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:10 crc kubenswrapper[4793]: I0126 22:40:10.029854 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:10Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:10 crc kubenswrapper[4793]: I0126 22:40:10.052416 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:10Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:10 crc kubenswrapper[4793]: I0126 22:40:10.073913 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:10Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:10 crc kubenswrapper[4793]: I0126 22:40:10.095738 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:10Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:10 crc kubenswrapper[4793]: I0126 22:40:10.715707 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 16:23:46.196366185 +0000 UTC Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.596856 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.599159 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.599237 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.599256 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.599363 4793 kubelet_node_status.go:76] "Attempting to register node" 
node="crc" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.608750 4793 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.609347 4793 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.610759 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.610796 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.610812 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.610835 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.610853 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:11Z","lastTransitionTime":"2026-01-26T22:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:11 crc kubenswrapper[4793]: E0126 22:40:11.634411 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:11Z is after 2025-08-24T17:21:41Z"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.639633 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.639837 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.639969 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.640098 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.640271 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:11Z","lastTransitionTime":"2026-01-26T22:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:11 crc kubenswrapper[4793]: E0126 22:40:11.657970 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:11Z is after 2025-08-24T17:21:41Z"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.663627 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.663843 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.663995 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.664132 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.664289 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:11Z","lastTransitionTime":"2026-01-26T22:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:11 crc kubenswrapper[4793]: E0126 22:40:11.679357 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:11Z is after 2025-08-24T17:21:41Z"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.684244 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.684297 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.684320 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.684350 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.684372 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:11Z","lastTransitionTime":"2026-01-26T22:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:11 crc kubenswrapper[4793]: E0126 22:40:11.714757 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:11Z is after 2025-08-24T17:21:41Z"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.716660 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 07:29:05.742343004 +0000 UTC
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.721310 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.721357 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.721371 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.721394 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.721409 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:11Z","lastTransitionTime":"2026-01-26T22:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:11 crc kubenswrapper[4793]: E0126 22:40:11.744314 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:11Z is after 2025-08-24T17:21:41Z"
Jan 26 22:40:11 crc kubenswrapper[4793]: E0126 22:40:11.745026 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.747534 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.747754 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.747989 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.748512 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.748876 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:11Z","lastTransitionTime":"2026-01-26T22:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.760783 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.760849 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.760902 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:11 crc kubenswrapper[4793]: E0126 22:40:11.761607 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:11 crc kubenswrapper[4793]: E0126 22:40:11.761492 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:11 crc kubenswrapper[4793]: E0126 22:40:11.761823 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.852462 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.852507 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.852523 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.852547 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.852562 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:11Z","lastTransitionTime":"2026-01-26T22:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.955444 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.955511 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.955537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.955571 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.955594 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:11Z","lastTransitionTime":"2026-01-26T22:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.059588 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.059658 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.059677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.059708 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.059730 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:12Z","lastTransitionTime":"2026-01-26T22:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.066912 4793 csr.go:261] certificate signing request csr-x4qqb is approved, waiting to be issued Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.084022 4793 csr.go:257] certificate signing request csr-x4qqb is issued Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.123470 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-jwwcw"] Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.123837 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-jwwcw" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.126063 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.126561 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.127559 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.157039 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:12Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.162086 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.162111 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.162119 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.162134 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.162143 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:12Z","lastTransitionTime":"2026-01-26T22:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.203540 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/68b05073-b99f-4026-986a-0b8a0f7be18a-hosts-file\") pod \"node-resolver-jwwcw\" (UID: \"68b05073-b99f-4026-986a-0b8a0f7be18a\") " pod="openshift-dns/node-resolver-jwwcw" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.203591 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrxc8\" (UniqueName: \"kubernetes.io/projected/68b05073-b99f-4026-986a-0b8a0f7be18a-kube-api-access-qrxc8\") pod \"node-resolver-jwwcw\" (UID: \"68b05073-b99f-4026-986a-0b8a0f7be18a\") " pod="openshift-dns/node-resolver-jwwcw" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.238809 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:12Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.264067 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.264096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.264104 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.264119 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.264129 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:12Z","lastTransitionTime":"2026-01-26T22:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.285505 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:12Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.304588 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/68b05073-b99f-4026-986a-0b8a0f7be18a-hosts-file\") pod \"node-resolver-jwwcw\" (UID: \"68b05073-b99f-4026-986a-0b8a0f7be18a\") " pod="openshift-dns/node-resolver-jwwcw" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.304625 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrxc8\" (UniqueName: \"kubernetes.io/projected/68b05073-b99f-4026-986a-0b8a0f7be18a-kube-api-access-qrxc8\") pod \"node-resolver-jwwcw\" (UID: \"68b05073-b99f-4026-986a-0b8a0f7be18a\") " pod="openshift-dns/node-resolver-jwwcw" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.304890 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/68b05073-b99f-4026-986a-0b8a0f7be18a-hosts-file\") pod \"node-resolver-jwwcw\" (UID: \"68b05073-b99f-4026-986a-0b8a0f7be18a\") " pod="openshift-dns/node-resolver-jwwcw" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.316676 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:12Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.328216 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrxc8\" (UniqueName: \"kubernetes.io/projected/68b05073-b99f-4026-986a-0b8a0f7be18a-kube-api-access-qrxc8\") pod \"node-resolver-jwwcw\" (UID: \"68b05073-b99f-4026-986a-0b8a0f7be18a\") " pod="openshift-dns/node-resolver-jwwcw" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.353731 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:12Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.369687 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.369962 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.370047 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.370124 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.370205 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:12Z","lastTransitionTime":"2026-01-26T22:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.386016 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a8
89fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:12Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.407488 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:12Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.420050 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:12Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.437009 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:12Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.445773 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-jwwcw" Jan 26 22:40:12 crc kubenswrapper[4793]: W0126 22:40:12.461397 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68b05073_b99f_4026_986a_0b8a0f7be18a.slice/crio-77b96212d18d7c09b13289563d020820b7c0f525cef8a89f60de3415541caf4b WatchSource:0}: Error finding container 77b96212d18d7c09b13289563d020820b7c0f525cef8a89f60de3415541caf4b: Status 404 returned error can't find the container with id 77b96212d18d7c09b13289563d020820b7c0f525cef8a89f60de3415541caf4b Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.472940 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.472998 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.473009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.473024 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.473054 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:12Z","lastTransitionTime":"2026-01-26T22:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.575540 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.575599 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.575610 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.575630 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.575647 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:12Z","lastTransitionTime":"2026-01-26T22:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.678382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.678435 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.678449 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.678479 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.678495 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:12Z","lastTransitionTime":"2026-01-26T22:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.717813 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:15:39.28029654 +0000 UTC Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.781203 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.781245 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.781255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.781270 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.781279 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:12Z","lastTransitionTime":"2026-01-26T22:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.884426 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.884459 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.884467 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.884483 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.884494 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:12Z","lastTransitionTime":"2026-01-26T22:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.939654 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jwwcw" event={"ID":"68b05073-b99f-4026-986a-0b8a0f7be18a","Type":"ContainerStarted","Data":"05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.939697 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jwwcw" event={"ID":"68b05073-b99f-4026-986a-0b8a0f7be18a","Type":"ContainerStarted","Data":"77b96212d18d7c09b13289563d020820b7c0f525cef8a89f60de3415541caf4b"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.967178 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:12Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.987594 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:12 crc 
kubenswrapper[4793]: I0126 22:40:12.987649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.987661 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.987684 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.987696 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:12Z","lastTransitionTime":"2026-01-26T22:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.992125 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:12Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.007909 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.020284 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.039301 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.062515 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-5htjl"] Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.062930 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.064398 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-l5qgq"] Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.064719 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.065462 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.068275 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pwtbk"] Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.068919 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-ptkmd"] Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.069110 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.070007 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.070209 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.071390 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.071451 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.073084 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.073545 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.074400 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.074609 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.076871 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.077203 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.077200 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.077124 4793 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.077814 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.078017 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.078165 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.078346 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.078467 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.078671 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.078865 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.083242 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.085458 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-26 22:35:12 +0000 UTC, rotation deadline is 2026-12-16 11:50:49.74805386 +0000 UTC Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.085483 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 
7765h10m36.662573167s for next certificate rotation Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.089821 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.089851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.089862 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.089881 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.089893 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:13Z","lastTransitionTime":"2026-01-26T22:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.096369 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.112925 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.127260 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.140415 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.151891 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.162296 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.173984 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.186496 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.192752 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.192817 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.192831 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 
22:40:13.192851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.192869 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:13Z","lastTransitionTime":"2026-01-26T22:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.198606 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.209220 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.217619 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-multus-conf-dir\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.217661 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-openvswitch\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.217679 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovn-node-metrics-cert\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.217703 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-cnibin\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.217814 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-var-lib-kubelet\") 
pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.217857 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-node-log\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.217900 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-hostroot\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.217944 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-system-cni-dir\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.217988 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxzr2\" (UniqueName: \"kubernetes.io/projected/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-kube-api-access-qxzr2\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218035 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22a78b43-c8a5-48e0-8fe3-89bc7b449391-mcd-auth-proxy-config\") pod \"machine-config-daemon-5htjl\" (UID: \"22a78b43-c8a5-48e0-8fe3-89bc7b449391\") " pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218068 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-cnibin\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218087 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-slash\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218123 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-run-multus-certs\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218142 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-etc-kubernetes\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218171 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-cni-binary-copy\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218202 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovnkube-script-lib\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218223 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-cni-binary-copy\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218238 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml9ws\" (UniqueName: \"kubernetes.io/projected/22a78b43-c8a5-48e0-8fe3-89bc7b449391-kube-api-access-ml9ws\") pod \"machine-config-daemon-5htjl\" (UID: \"22a78b43-c8a5-48e0-8fe3-89bc7b449391\") " pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218259 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-system-cni-dir\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218278 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-var-lib-openvswitch\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218298 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-run-ovn-kubernetes\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218318 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218335 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-var-lib-cni-multus\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218352 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4df2w\" (UniqueName: \"kubernetes.io/projected/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-kube-api-access-4df2w\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc 
kubenswrapper[4793]: I0126 22:40:13.218371 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/22a78b43-c8a5-48e0-8fe3-89bc7b449391-rootfs\") pod \"machine-config-daemon-5htjl\" (UID: \"22a78b43-c8a5-48e0-8fe3-89bc7b449391\") " pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218389 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-var-lib-cni-bin\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218406 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-kubelet\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218423 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218445 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-os-release\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: 
\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218471 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-multus-daemon-config\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218485 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-log-socket\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218501 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-env-overrides\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218517 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-multus-socket-dir-parent\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218538 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-run-k8s-cni-cncf-io\") pod 
\"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218554 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-systemd\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218572 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-multus-cni-dir\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218587 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-ovn\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218607 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-cni-bin\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218622 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-systemd-units\") pod \"ovnkube-node-pwtbk\" (UID: 
\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218637 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-etc-openvswitch\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218653 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97spd\" (UniqueName: \"kubernetes.io/projected/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-kube-api-access-97spd\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218672 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/22a78b43-c8a5-48e0-8fe3-89bc7b449391-proxy-tls\") pod \"machine-config-daemon-5htjl\" (UID: \"22a78b43-c8a5-48e0-8fe3-89bc7b449391\") " pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218689 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218703 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-os-release\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218719 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-run-netns\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218734 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-run-netns\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218748 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-cni-netd\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218764 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovnkube-config\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218983 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.243823 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.263946 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.280418 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.295498 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.295560 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.295577 4793 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.295602 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.295621 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:13Z","lastTransitionTime":"2026-01-26T22:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.297777 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.316062 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319509 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-etc-kubernetes\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319556 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-slash\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319583 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-run-multus-certs\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319616 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-cni-binary-copy\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319635 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovnkube-script-lib\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319656 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-cni-binary-copy\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319676 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319696 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml9ws\" (UniqueName: 
\"kubernetes.io/projected/22a78b43-c8a5-48e0-8fe3-89bc7b449391-kube-api-access-ml9ws\") pod \"machine-config-daemon-5htjl\" (UID: \"22a78b43-c8a5-48e0-8fe3-89bc7b449391\") " pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319694 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-etc-kubernetes\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319776 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-slash\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319813 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-system-cni-dir\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319826 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-run-multus-certs\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319717 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-system-cni-dir\") pod \"multus-l5qgq\" (UID: 
\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319869 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-var-lib-openvswitch\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319892 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-run-ovn-kubernetes\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319913 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-var-lib-cni-multus\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319937 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4df2w\" (UniqueName: \"kubernetes.io/projected/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-kube-api-access-4df2w\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319960 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/22a78b43-c8a5-48e0-8fe3-89bc7b449391-rootfs\") pod \"machine-config-daemon-5htjl\" (UID: \"22a78b43-c8a5-48e0-8fe3-89bc7b449391\") " 
pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319982 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-var-lib-cni-bin\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320000 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-kubelet\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320020 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320042 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-os-release\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320069 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-multus-daemon-config\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") 
" pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320088 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-log-socket\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320107 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-env-overrides\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320130 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-systemd\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320153 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-multus-socket-dir-parent\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320173 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-run-k8s-cni-cncf-io\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 
22:40:13.320208 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-multus-cni-dir\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320230 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-ovn\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320254 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-cni-bin\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320272 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/22a78b43-c8a5-48e0-8fe3-89bc7b449391-proxy-tls\") pod \"machine-config-daemon-5htjl\" (UID: \"22a78b43-c8a5-48e0-8fe3-89bc7b449391\") " pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320291 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-systemd-units\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320311 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-etc-openvswitch\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320330 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97spd\" (UniqueName: \"kubernetes.io/projected/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-kube-api-access-97spd\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320349 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovnkube-config\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320368 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320390 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-os-release\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320413 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-run-netns\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320432 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-run-netns\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320451 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-cni-netd\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320469 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-multus-conf-dir\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320490 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-openvswitch\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320509 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovn-node-metrics-cert\") pod 
\"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320530 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-cnibin\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320551 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-node-log\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320588 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-var-lib-kubelet\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320606 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-hostroot\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320624 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-system-cni-dir\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " 
pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320648 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxzr2\" (UniqueName: \"kubernetes.io/projected/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-kube-api-access-qxzr2\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320668 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22a78b43-c8a5-48e0-8fe3-89bc7b449391-mcd-auth-proxy-config\") pod \"machine-config-daemon-5htjl\" (UID: \"22a78b43-c8a5-48e0-8fe3-89bc7b449391\") " pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320687 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-cnibin\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320754 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-cnibin\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320770 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-cni-binary-copy\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" 
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321061 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-systemd-units\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321117 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-etc-openvswitch\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321290 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-os-release\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321481 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321585 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-var-lib-openvswitch\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321623 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-run-ovn-kubernetes\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321650 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-var-lib-cni-multus\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321701 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-cni-binary-copy\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321786 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/22a78b43-c8a5-48e0-8fe3-89bc7b449391-rootfs\") pod \"machine-config-daemon-5htjl\" (UID: \"22a78b43-c8a5-48e0-8fe3-89bc7b449391\") " pod="openshift-machine-config-operator/machine-config-daemon-5htjl"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321822 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-kubelet\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321826 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321875 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-multus-socket-dir-parent\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321894 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-var-lib-cni-bin\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321904 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-log-socket\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321947 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-systemd\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322100 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovnkube-script-lib\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322167 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-openvswitch\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322240 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-multus-cni-dir\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322307 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-run-k8s-cni-cncf-io\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322432 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-env-overrides\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322428 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-cni-bin\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322594 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-run-netns\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322607 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-multus-conf-dir\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322625 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-cnibin\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322640 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-var-lib-kubelet\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322691 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovnkube-config\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322856 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-multus-daemon-config\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322755 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-os-release\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322714 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-cni-netd\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322902 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-run-netns\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.323053 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-hostroot\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.323038 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-system-cni-dir\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.323151 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-node-log\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.323320 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-ovn\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.323630 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22a78b43-c8a5-48e0-8fe3-89bc7b449391-mcd-auth-proxy-config\") pod \"machine-config-daemon-5htjl\" (UID: \"22a78b43-c8a5-48e0-8fe3-89bc7b449391\") " pod="openshift-machine-config-operator/machine-config-daemon-5htjl"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.324724 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/22a78b43-c8a5-48e0-8fe3-89bc7b449391-proxy-tls\") pod \"machine-config-daemon-5htjl\" (UID: \"22a78b43-c8a5-48e0-8fe3-89bc7b449391\") " pod="openshift-machine-config-operator/machine-config-daemon-5htjl"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.326305 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovn-node-metrics-cert\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.337425 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml9ws\" (UniqueName: \"kubernetes.io/projected/22a78b43-c8a5-48e0-8fe3-89bc7b449391-kube-api-access-ml9ws\") pod \"machine-config-daemon-5htjl\" (UID: \"22a78b43-c8a5-48e0-8fe3-89bc7b449391\") " pod="openshift-machine-config-operator/machine-config-daemon-5htjl"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.337569 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.342797 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4df2w\" (UniqueName: \"kubernetes.io/projected/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-kube-api-access-4df2w\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.345331 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97spd\" (UniqueName: \"kubernetes.io/projected/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-kube-api-access-97spd\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.359816 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxzr2\" (UniqueName: \"kubernetes.io/projected/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-kube-api-access-qxzr2\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.380087 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5htjl"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.386742 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-l5qgq"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.396502 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.397800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.397855 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.397868 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.397883 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.397893 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:13Z","lastTransitionTime":"2026-01-26T22:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.400690 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ptkmd"
Jan 26 22:40:13 crc kubenswrapper[4793]: W0126 22:40:13.404830 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e6daa0d_7641_46e1_b9ab_8479c1cd00d6.slice/crio-fac125aa81c41d4d153ea4abcb2a0c8c5ff6f23b449fe33794c9848a457352d7 WatchSource:0}: Error finding container fac125aa81c41d4d153ea4abcb2a0c8c5ff6f23b449fe33794c9848a457352d7: Status 404 returned error can't find the container with id fac125aa81c41d4d153ea4abcb2a0c8c5ff6f23b449fe33794c9848a457352d7
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.421934 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.422085 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422103 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:40:21.422078581 +0000 UTC m=+36.410850093 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.422233 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.422280 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422297 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422361 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422387 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422416 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:21.42240154 +0000 UTC m=+36.411173072 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422426 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422566 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:21.422552234 +0000 UTC m=+36.411323746 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.422313 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422845 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422896 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422907 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422920 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422940 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:21.422927105 +0000 UTC m=+36.411698627 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422958 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:21.422949996 +0000 UTC m=+36.411721518 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.506158 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.506247 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.506263 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.506290 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.506305 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:13Z","lastTransitionTime":"2026-01-26T22:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.609655 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.609726 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.609738 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.609782 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.609794 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:13Z","lastTransitionTime":"2026-01-26T22:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.712678 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.712718 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.712728 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.712748 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.712763 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:13Z","lastTransitionTime":"2026-01-26T22:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.718144 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 03:41:37.505424542 +0000 UTC
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.760419 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.760501 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.760915 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.761033 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.761155 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.765649 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.815649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.815693 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.815702 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.815718 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.815730 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:13Z","lastTransitionTime":"2026-01-26T22:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.919645 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.919706 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.919722 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.919745 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.919761 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:13Z","lastTransitionTime":"2026-01-26T22:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.951106 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l5qgq" event={"ID":"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6","Type":"ContainerStarted","Data":"a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08"}
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.951229 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l5qgq" event={"ID":"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6","Type":"ContainerStarted","Data":"fac125aa81c41d4d153ea4abcb2a0c8c5ff6f23b449fe33794c9848a457352d7"}
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.953284 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerStarted","Data":"2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353"}
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.953364 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerStarted","Data":"f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944"}
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.953386 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerStarted","Data":"2eef6063c20fd0b6e4d20236ef5e071a6fe8bc105d178308dc384b0936022a50"}
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.955165 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef" exitCode=0
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.955220 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef"}
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.955257 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerStarted","Data":"972d2d9dd7c3ad94fb7b559afde32870a71392b37532ff0117112547580522c5"}
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.960838 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" event={"ID":"b1a14c1f-430a-4e5b-bd6a-01a959edbab1","Type":"ContainerStarted","Data":"9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b"}
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.960908 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" event={"ID":"b1a14c1f-430a-4e5b-bd6a-01a959edbab1","Type":"ContainerStarted","Data":"3edf8f863f7f4b62e357bcbfbaf64c553b35bbb4d8d0ca31b2b304dba3b19de6"}
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.976077 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z"
Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.992668 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.009683 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.022783 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.022843 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.022858 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.022901 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.022916 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:14Z","lastTransitionTime":"2026-01-26T22:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.027609 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a8
89fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.041804 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.059418 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.080747 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.095716 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.111846 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.125168 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.126510 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.126576 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.126597 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.126626 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.126646 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:14Z","lastTransitionTime":"2026-01-26T22:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.145575 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.160258 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.189320 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, 
/tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.210847 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.229314 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.229368 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.229388 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 
22:40:14.229417 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.229438 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:14Z","lastTransitionTime":"2026-01-26T22:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.231396 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.249901 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.260216 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.276604 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.289781 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d
742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.302293 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, 
/tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.315376 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.332557 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.332624 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.332644 4793 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.332678 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.332697 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:14Z","lastTransitionTime":"2026-01-26T22:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.342747 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.400919 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.417529 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.430474 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.440851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.440924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.440936 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.440957 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.440969 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:14Z","lastTransitionTime":"2026-01-26T22:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.448119 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z 
is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.543682 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.543761 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.543782 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.543813 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.543835 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:14Z","lastTransitionTime":"2026-01-26T22:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.646215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.646283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.646303 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.646331 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.646349 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:14Z","lastTransitionTime":"2026-01-26T22:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.719019 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 14:46:12.606221069 +0000 UTC Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.748877 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.748947 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.748969 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.748994 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.749014 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:14Z","lastTransitionTime":"2026-01-26T22:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.852088 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.852154 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.852174 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.852227 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.852246 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:14Z","lastTransitionTime":"2026-01-26T22:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.957506 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.957583 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.957603 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.957632 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.957654 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:14Z","lastTransitionTime":"2026-01-26T22:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.970684 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerStarted","Data":"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f"} Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.970750 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerStarted","Data":"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c"} Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.970764 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerStarted","Data":"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff"} Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.970780 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerStarted","Data":"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130"} Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.970800 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerStarted","Data":"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f"} Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.972876 4793 generic.go:334] "Generic (PLEG): container finished" podID="b1a14c1f-430a-4e5b-bd6a-01a959edbab1" containerID="9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b" exitCode=0 Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.972996 4793 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" event={"ID":"b1a14c1f-430a-4e5b-bd6a-01a959edbab1","Type":"ContainerDied","Data":"9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b"} Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.993243 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.000802 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-cjhd7"] Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.001207 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-cjhd7" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.003123 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.003536 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.003636 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.004354 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.016689 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.032145 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.047648 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.062449 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.062536 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.062575 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.062600 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.062618 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:15Z","lastTransitionTime":"2026-01-26T22:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.066183 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.087993 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.103646 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.117909 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.142223 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.150497 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/24361290-ee1e-4424-b0f0-27d0b8f013ea-host\") pod \"node-ca-cjhd7\" (UID: \"24361290-ee1e-4424-b0f0-27d0b8f013ea\") " pod="openshift-image-registry/node-ca-cjhd7" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.150628 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgrbl\" (UniqueName: \"kubernetes.io/projected/24361290-ee1e-4424-b0f0-27d0b8f013ea-kube-api-access-fgrbl\") pod \"node-ca-cjhd7\" (UID: \"24361290-ee1e-4424-b0f0-27d0b8f013ea\") " pod="openshift-image-registry/node-ca-cjhd7" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.150711 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/24361290-ee1e-4424-b0f0-27d0b8f013ea-serviceca\") pod \"node-ca-cjhd7\" (UID: \"24361290-ee1e-4424-b0f0-27d0b8f013ea\") " pod="openshift-image-registry/node-ca-cjhd7" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.162207 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.166574 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.166628 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.166642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.166665 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.166690 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:15Z","lastTransitionTime":"2026-01-26T22:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.181084 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.195013 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.221863 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.236256 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.252129 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/24361290-ee1e-4424-b0f0-27d0b8f013ea-serviceca\") pod \"node-ca-cjhd7\" (UID: \"24361290-ee1e-4424-b0f0-27d0b8f013ea\") " pod="openshift-image-registry/node-ca-cjhd7" Jan 26 
22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.252259 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/24361290-ee1e-4424-b0f0-27d0b8f013ea-host\") pod \"node-ca-cjhd7\" (UID: \"24361290-ee1e-4424-b0f0-27d0b8f013ea\") " pod="openshift-image-registry/node-ca-cjhd7" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.252311 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgrbl\" (UniqueName: \"kubernetes.io/projected/24361290-ee1e-4424-b0f0-27d0b8f013ea-kube-api-access-fgrbl\") pod \"node-ca-cjhd7\" (UID: \"24361290-ee1e-4424-b0f0-27d0b8f013ea\") " pod="openshift-image-registry/node-ca-cjhd7" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.252319 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc
4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.252522 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/24361290-ee1e-4424-b0f0-27d0b8f013ea-host\") pod \"node-ca-cjhd7\" (UID: \"24361290-ee1e-4424-b0f0-27d0b8f013ea\") " pod="openshift-image-registry/node-ca-cjhd7" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.253650 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/24361290-ee1e-4424-b0f0-27d0b8f013ea-serviceca\") pod \"node-ca-cjhd7\" (UID: \"24361290-ee1e-4424-b0f0-27d0b8f013ea\") " pod="openshift-image-registry/node-ca-cjhd7" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.266432 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.270082 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.270255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.270360 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.270458 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.270577 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:15Z","lastTransitionTime":"2026-01-26T22:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.278815 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgrbl\" (UniqueName: \"kubernetes.io/projected/24361290-ee1e-4424-b0f0-27d0b8f013ea-kube-api-access-fgrbl\") pod \"node-ca-cjhd7\" (UID: \"24361290-ee1e-4424-b0f0-27d0b8f013ea\") " pod="openshift-image-registry/node-ca-cjhd7" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.281035 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.302795 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.328427 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-cjhd7" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.331942 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.359540 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.386681 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.386726 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.386738 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.386756 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.386768 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:15Z","lastTransitionTime":"2026-01-26T22:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.389964 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.409144 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.430356 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.442183 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.454157 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.466581 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.482288 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.489710 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.489768 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.489784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.489809 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.489828 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:15Z","lastTransitionTime":"2026-01-26T22:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.517995 4793 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 26 22:40:15 crc kubenswrapper[4793]: W0126 22:40:15.520161 4793 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Jan 26 22:40:15 crc kubenswrapper[4793]: W0126 22:40:15.520197 4793 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 22:40:15 crc kubenswrapper[4793]: W0126 22:40:15.520700 4793 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Jan 26 22:40:15 crc kubenswrapper[4793]: W0126 22:40:15.521626 4793 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.592646 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.592712 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.592732 4793 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.592764 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.592787 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:15Z","lastTransitionTime":"2026-01-26T22:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.695277 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.695344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.695364 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.695395 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.696300 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:15Z","lastTransitionTime":"2026-01-26T22:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.719358 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 13:41:06.657666969 +0000 UTC Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.760260 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:15 crc kubenswrapper[4793]: E0126 22:40:15.760416 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.760510 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:15 crc kubenswrapper[4793]: E0126 22:40:15.760776 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.760510 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:15 crc kubenswrapper[4793]: E0126 22:40:15.760985 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.778247 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.793819 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.799462 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.799515 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.799527 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.799551 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.799564 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:15Z","lastTransitionTime":"2026-01-26T22:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.808350 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.823498 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.853233 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.884782 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.901904 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.903222 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.903280 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.903291 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.903312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.903329 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:15Z","lastTransitionTime":"2026-01-26T22:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.921244 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.938489 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.956831 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.975975 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.981622 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" event={"ID":"b1a14c1f-430a-4e5b-bd6a-01a959edbab1","Type":"ContainerStarted","Data":"44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.985226 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-cjhd7" 
event={"ID":"24361290-ee1e-4424-b0f0-27d0b8f013ea","Type":"ContainerStarted","Data":"5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.985298 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-cjhd7" event={"ID":"24361290-ee1e-4424-b0f0-27d0b8f013ea","Type":"ContainerStarted","Data":"81a77aa776488768f9f9d8e2ee1e3b47966a0c6dffef06988f167ce3533f71b1"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.989469 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerStarted","Data":"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.998825 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.006178 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.006227 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.006255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.006276 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.006288 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:16Z","lastTransitionTime":"2026-01-26T22:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.018464 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.036860 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e
17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.051851 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"st
arted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.072730 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.093355 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.110246 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.110632 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.110799 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.110974 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.111102 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:16Z","lastTransitionTime":"2026-01-26T22:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.114577 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.128469 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.148640 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.169741 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.186753 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.201058 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.214430 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.214490 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.214506 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.214533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.214550 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:16Z","lastTransitionTime":"2026-01-26T22:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.222285 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.238925 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.258322 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.270702 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.283483 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.316949 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.316994 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.317008 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.317026 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.317038 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:16Z","lastTransitionTime":"2026-01-26T22:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.335158 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.419769 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.419827 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.419836 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.419851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.419879 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:16Z","lastTransitionTime":"2026-01-26T22:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.523175 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.523283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.523303 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.523338 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.523357 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:16Z","lastTransitionTime":"2026-01-26T22:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.530258 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.627032 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.627107 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.627125 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.627150 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.627168 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:16Z","lastTransitionTime":"2026-01-26T22:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.720313 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 20:09:45.993049834 +0000 UTC Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.731733 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.731803 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.731823 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.731853 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.731873 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:16Z","lastTransitionTime":"2026-01-26T22:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.794499 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.836436 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.836524 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.836545 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.836575 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.836595 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:16Z","lastTransitionTime":"2026-01-26T22:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.940137 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.940263 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.940289 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.940319 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.940339 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:16Z","lastTransitionTime":"2026-01-26T22:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.016403 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.039134 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.044023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.044068 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.044084 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.044113 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.044131 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:17Z","lastTransitionTime":"2026-01-26T22:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.059049 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.075484 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.087048 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.099616 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.125355 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"st
arted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.147587 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.147656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.147680 
4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.147710 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.147729 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:17Z","lastTransitionTime":"2026-01-26T22:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.150672 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.171126 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.196762 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.219127 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.237094 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.251398 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.251438 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.251448 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.251468 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.251479 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:17Z","lastTransitionTime":"2026-01-26T22:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.253685 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a8
89fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.270317 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.291268 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.355025 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.355077 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.355094 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.355119 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.355141 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:17Z","lastTransitionTime":"2026-01-26T22:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.458619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.459240 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.459274 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.459309 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.459333 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:17Z","lastTransitionTime":"2026-01-26T22:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.564692 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.564749 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.564767 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.564794 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.564814 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:17Z","lastTransitionTime":"2026-01-26T22:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.668157 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.668240 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.668259 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.668284 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.668302 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:17Z","lastTransitionTime":"2026-01-26T22:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.721411 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 04:53:13.309234562 +0000 UTC Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.760893 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.760987 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.761137 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:17 crc kubenswrapper[4793]: E0126 22:40:17.761385 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:17 crc kubenswrapper[4793]: E0126 22:40:17.761557 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:17 crc kubenswrapper[4793]: E0126 22:40:17.761688 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.770826 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.770877 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.770895 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.770921 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.771022 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:17Z","lastTransitionTime":"2026-01-26T22:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.874700 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.874750 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.874760 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.874777 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.874789 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:17Z","lastTransitionTime":"2026-01-26T22:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.977877 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.977931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.977942 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.977961 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.977973 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:17Z","lastTransitionTime":"2026-01-26T22:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.002257 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerStarted","Data":"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b"} Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.005039 4793 generic.go:334] "Generic (PLEG): container finished" podID="b1a14c1f-430a-4e5b-bd6a-01a959edbab1" containerID="44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847" exitCode=0 Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.005102 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" event={"ID":"b1a14c1f-430a-4e5b-bd6a-01a959edbab1","Type":"ContainerDied","Data":"44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847"} Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.036147 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.056340 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, 
/tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.075181 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.080262 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.080308 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.080327 4793 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.080356 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.080376 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:18Z","lastTransitionTime":"2026-01-26T22:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.097950 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.114805 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.138229 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.154382 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.169677 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.183502 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.183743 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.183795 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.183815 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.183849 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.183871 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:18Z","lastTransitionTime":"2026-01-26T22:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.220087 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.239960 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.257425 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.271633 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.283006 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.286900 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.286955 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.286977 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.287009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.287062 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:18Z","lastTransitionTime":"2026-01-26T22:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.392409 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.392464 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.392479 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.392506 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.392522 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:18Z","lastTransitionTime":"2026-01-26T22:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.495377 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.495431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.495441 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.495461 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.495472 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:18Z","lastTransitionTime":"2026-01-26T22:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.599058 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.599168 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.599178 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.599240 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.599253 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:18Z","lastTransitionTime":"2026-01-26T22:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.701900 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.701960 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.701973 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.701998 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.702012 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:18Z","lastTransitionTime":"2026-01-26T22:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.722594 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 22:04:17.632366313 +0000 UTC Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.805149 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.805252 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.805270 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.805293 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.805308 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:18Z","lastTransitionTime":"2026-01-26T22:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.908602 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.908680 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.908699 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.908728 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.908747 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:18Z","lastTransitionTime":"2026-01-26T22:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.011819 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.011858 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.011868 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.011888 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.011899 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:19Z","lastTransitionTime":"2026-01-26T22:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.012557 4793 generic.go:334] "Generic (PLEG): container finished" podID="b1a14c1f-430a-4e5b-bd6a-01a959edbab1" containerID="c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4" exitCode=0 Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.012599 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" event={"ID":"b1a14c1f-430a-4e5b-bd6a-01a959edbab1","Type":"ContainerDied","Data":"c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4"} Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.033420 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.048243 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.062134 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.080648 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.106232 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde247
0d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.115109 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.115167 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.115213 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.115253 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.115275 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:19Z","lastTransitionTime":"2026-01-26T22:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.128859 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.148129 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.166554 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.184181 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, 
/tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.208163 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.220312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.220380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.220400 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:19 crc 
kubenswrapper[4793]: I0126 22:40:19.220429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.220447 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:19Z","lastTransitionTime":"2026-01-26T22:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.222353 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.235454 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.248985 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.268064 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.324742 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.324798 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.324813 
4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.324837 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.324851 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:19Z","lastTransitionTime":"2026-01-26T22:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.429448 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.429591 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.430021 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.430064 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.430086 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:19Z","lastTransitionTime":"2026-01-26T22:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.536426 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.536924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.536945 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.536981 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.537005 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:19Z","lastTransitionTime":"2026-01-26T22:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.643915 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.643972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.643986 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.644015 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.644030 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:19Z","lastTransitionTime":"2026-01-26T22:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.723079 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 18:03:13.09412886 +0000 UTC Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.753467 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.753531 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.753551 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.753578 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.753600 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:19Z","lastTransitionTime":"2026-01-26T22:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.760493 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.760551 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:19 crc kubenswrapper[4793]: E0126 22:40:19.760656 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:19 crc kubenswrapper[4793]: E0126 22:40:19.760776 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.761005 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:19 crc kubenswrapper[4793]: E0126 22:40:19.761333 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.794754 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.818283 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.844155 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\"
,\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.857751 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.857811 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.857831 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.857859 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.857883 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:19Z","lastTransitionTime":"2026-01-26T22:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.865638 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.887226 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.903949 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.924881 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.944893 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.960584 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.960633 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.960660 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.960697 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.960723 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:19Z","lastTransitionTime":"2026-01-26T22:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.962026 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.983827 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.008848 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.022504 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerStarted","Data":"6da69a68e191910fa526697d2561b385a8cc87e0a79b99f66ec85845dcf9f64c"} Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.023032 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.023083 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.026635 4793 generic.go:334] "Generic (PLEG): container finished" podID="b1a14c1f-430a-4e5b-bd6a-01a959edbab1" containerID="1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644" exitCode=0 Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.026699 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" event={"ID":"b1a14c1f-430a-4e5b-bd6a-01a959edbab1","Type":"ContainerDied","Data":"1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644"} Jan 26 22:40:20 crc 
kubenswrapper[4793]: I0126 22:40:20.031329 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 
22:40:20.049315 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.062968 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.064851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.064904 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.064924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.064953 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.064972 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:20Z","lastTransitionTime":"2026-01-26T22:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.066147 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.072577 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.090551 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.121933 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6da69a68e191910fa526697d2561b385a8cc87e0a79b99f66ec85845dcf9f64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.141694 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.157461 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.167924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.167959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.167975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.167999 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.168016 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:20Z","lastTransitionTime":"2026-01-26T22:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.176148 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.193548 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.210668 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf267
0a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.227917 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\
":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.244282 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.262962 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.271115 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.271220 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.271243 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.271274 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.271294 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:20Z","lastTransitionTime":"2026-01-26T22:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.278220 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.289909 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.307645 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.331152 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.360833 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.374913 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.374959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.374971 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.374989 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.375002 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:20Z","lastTransitionTime":"2026-01-26T22:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.478463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.478519 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.478542 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.478574 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.478597 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:20Z","lastTransitionTime":"2026-01-26T22:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.583060 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.583658 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.583862 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.584075 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.584368 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:20Z","lastTransitionTime":"2026-01-26T22:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.687947 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.688021 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.688043 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.688073 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.688093 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:20Z","lastTransitionTime":"2026-01-26T22:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.723532 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 14:06:59.528518634 +0000 UTC Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.790646 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.790675 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.790683 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.790698 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.790709 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:20Z","lastTransitionTime":"2026-01-26T22:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.893599 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.893676 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.893697 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.893732 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.893752 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:20Z","lastTransitionTime":"2026-01-26T22:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.995961 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.996171 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.996314 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.996396 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.996457 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:20Z","lastTransitionTime":"2026-01-26T22:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.032422 4793 generic.go:334] "Generic (PLEG): container finished" podID="b1a14c1f-430a-4e5b-bd6a-01a959edbab1" containerID="185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706" exitCode=0 Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.032773 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" event={"ID":"b1a14c1f-430a-4e5b-bd6a-01a959edbab1","Type":"ContainerDied","Data":"185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706"} Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.032975 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.055292 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\"
,\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.068786 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.085882 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.099289 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.099323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.099331 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.099348 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.099358 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:21Z","lastTransitionTime":"2026-01-26T22:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.103090 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.117790 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.132813 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.147409 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.160005 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.172523 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.185975 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.201060 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.203179 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.203384 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.203465 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.203528 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.203589 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:21Z","lastTransitionTime":"2026-01-26T22:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.212897 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.223945 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.242854 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6da69a68e191910fa526697d2561b385a8cc87e0a79b99f66ec85845dcf9f64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.306106 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.306151 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.306163 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.306183 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.306221 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:21Z","lastTransitionTime":"2026-01-26T22:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.408463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.408502 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.408510 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.408525 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.408537 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:21Z","lastTransitionTime":"2026-01-26T22:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.444789 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.444907 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.444941 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:40:37.444922684 +0000 UTC m=+52.433694196 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.444965 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.444995 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.445017 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.445064 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 
22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.445088 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.445099 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.445130 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.445141 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.445149 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.445157 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:37.4451421 +0000 UTC m=+52.433913612 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.445175 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:37.445167531 +0000 UTC m=+52.433939043 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.445108 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.445299 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:37.445279544 +0000 UTC m=+52.434051086 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.445092 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.445360 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:37.445347426 +0000 UTC m=+52.434118978 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.511866 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.511941 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.511965 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.511997 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.512022 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:21Z","lastTransitionTime":"2026-01-26T22:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.615766 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.615833 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.615852 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.615880 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.615906 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:21Z","lastTransitionTime":"2026-01-26T22:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.719615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.719695 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.719715 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.719744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.719768 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:21Z","lastTransitionTime":"2026-01-26T22:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.723781 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 22:50:33.911967625 +0000 UTC Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.760804 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.760894 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.760967 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.761006 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.761231 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.761346 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.822794 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.822851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.822862 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.822881 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.822894 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:21Z","lastTransitionTime":"2026-01-26T22:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.925627 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.925669 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.925680 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.925704 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.925721 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:21Z","lastTransitionTime":"2026-01-26T22:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.029183 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.029276 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.029294 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.029323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.029342 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.031402 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.031461 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.031483 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.031508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.031529 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.041463 4793 generic.go:334] "Generic (PLEG): container finished" podID="b1a14c1f-430a-4e5b-bd6a-01a959edbab1" containerID="d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0" exitCode=0 Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.041541 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" event={"ID":"b1a14c1f-430a-4e5b-bd6a-01a959edbab1","Type":"ContainerDied","Data":"d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0"} Jan 26 22:40:22 crc kubenswrapper[4793]: E0126 22:40:22.060019 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.065711 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.065782 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.065808 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.065843 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.065868 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.068752 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a8
89fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: E0126 22:40:22.083914 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.085570 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.088914 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.088943 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.088956 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.088976 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.088993 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.102772 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-
binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: E0126 22:40:22.113436 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.120294 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.120341 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.120351 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.120371 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.120383 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.137054 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6da69a68e191910fa526697d2561b385a8cc87e0a79b99f66ec85845dcf9f64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: E0126 22:40:22.143172 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.149178 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.149245 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.149260 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.149283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.149302 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.181619 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: E0126 22:40:22.187080 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: E0126 22:40:22.187280 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.202555 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.202865 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.202900 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.202909 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.202928 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.202939 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.226345 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.245813 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.261983 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf267
0a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.296106 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\
":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.305726 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.305757 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.305765 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.305781 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.305791 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.312065 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.324569 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.337450 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.347881 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.408255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.408297 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.408306 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.408322 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.408335 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.510422 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.510463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.510472 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.510494 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.510506 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.613775 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.613815 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.613826 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.613840 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.613851 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.717085 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.717143 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.717162 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.717206 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.717225 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.724680 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 03:49:45.213249184 +0000 UTC Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.820180 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.820257 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.820267 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.820284 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.820296 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.923526 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.923580 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.923590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.923606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.923616 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.026412 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.026463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.026474 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.026494 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.026509 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:23Z","lastTransitionTime":"2026-01-26T22:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.046638 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/0.log" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.050919 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="6da69a68e191910fa526697d2561b385a8cc87e0a79b99f66ec85845dcf9f64c" exitCode=1 Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.051047 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"6da69a68e191910fa526697d2561b385a8cc87e0a79b99f66ec85845dcf9f64c"} Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.052666 4793 scope.go:117] "RemoveContainer" containerID="6da69a68e191910fa526697d2561b385a8cc87e0a79b99f66ec85845dcf9f64c" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.057923 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" event={"ID":"b1a14c1f-430a-4e5b-bd6a-01a959edbab1","Type":"ContainerStarted","Data":"a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9"} Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.073662 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.089991 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.108214 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.123911 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.129508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.129548 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.129560 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.129578 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.129611 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:23Z","lastTransitionTime":"2026-01-26T22:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.138672 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.153747 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.171317 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf267
0a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.195106 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6da69a68e191910fa526697d2561b385a8cc87e0a79b99f66ec85845dcf9f64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6da69a68e191910fa526697d2561b385a8cc87e0a79b99f66ec85845dcf9f64c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 22:40:22.901414 6050 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 22:40:22.901450 6050 handler.go:190] Sending *v1.Namespace event 
handler 1 for removal\\\\nI0126 22:40:22.901483 6050 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 22:40:22.901503 6050 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 22:40:22.901510 6050 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 22:40:22.901531 6050 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 22:40:22.901559 6050 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 22:40:22.901565 6050 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:40:22.901588 6050 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 22:40:22.901606 6050 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 22:40:22.901653 6050 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 22:40:22.901691 6050 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 22:40:22.901714 6050 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 22:40:22.901730 6050 factory.go:656] Stopping watch factory\\\\nI0126 22:40:22.901747 6050 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
22:40:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac
8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.211838 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde247
0d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.227653 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\"
,\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.233576 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.233615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.233627 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.233646 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.233661 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:23Z","lastTransitionTime":"2026-01-26T22:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.243283 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.261561 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.275302 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.288693 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.304701 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.318447 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.331041 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.336042 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.336085 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.336099 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.336119 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.336130 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:23Z","lastTransitionTime":"2026-01-26T22:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.341989 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.361333 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6da69a68e191910fa526697d2561b385a8cc87e0a79b99f66ec85845dcf9f64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6da69a68e191910fa526697d2561b385a8cc87e0a79b99f66ec85845dcf9f64c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 22:40:22.901414 6050 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 22:40:22.901450 6050 handler.go:190] Sending *v1.Namespace event 
handler 1 for removal\\\\nI0126 22:40:22.901483 6050 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 22:40:22.901503 6050 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 22:40:22.901510 6050 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 22:40:22.901531 6050 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 22:40:22.901559 6050 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 22:40:22.901565 6050 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:40:22.901588 6050 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 22:40:22.901606 6050 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 22:40:22.901653 6050 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 22:40:22.901691 6050 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 22:40:22.901714 6050 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 22:40:22.901730 6050 factory.go:656] Stopping watch factory\\\\nI0126 22:40:22.901747 6050 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
22:40:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac
8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.375802 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4
183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.390533 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.405614 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.421545 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.437758 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.439537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.439564 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.439574 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.439593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.439605 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:23Z","lastTransitionTime":"2026-01-26T22:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.458291 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.477895 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.494050 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.515491 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.542599 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.542653 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.542665 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.542686 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.542700 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:23Z","lastTransitionTime":"2026-01-26T22:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.645672 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.645729 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.645741 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.645764 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.645777 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:23Z","lastTransitionTime":"2026-01-26T22:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.725439 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 03:47:45.564996665 +0000 UTC Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.748652 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.748700 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.748709 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.748728 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.748740 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:23Z","lastTransitionTime":"2026-01-26T22:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.762800 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:23 crc kubenswrapper[4793]: E0126 22:40:23.762941 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.763344 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:23 crc kubenswrapper[4793]: E0126 22:40:23.763428 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.763498 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:23 crc kubenswrapper[4793]: E0126 22:40:23.763590 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.851668 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.851720 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.851731 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.851751 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.851765 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:23Z","lastTransitionTime":"2026-01-26T22:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.954830 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.954876 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.954886 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.954902 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.954915 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:23Z","lastTransitionTime":"2026-01-26T22:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.058657 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.058714 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.058727 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.058748 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.058762 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:24Z","lastTransitionTime":"2026-01-26T22:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.063546 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/1.log" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.064503 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/0.log" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.069097 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2" exitCode=1 Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.069154 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2"} Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.069247 4793 scope.go:117] "RemoveContainer" containerID="6da69a68e191910fa526697d2561b385a8cc87e0a79b99f66ec85845dcf9f64c" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.070535 4793 scope.go:117] "RemoveContainer" containerID="5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2" Jan 26 22:40:24 crc kubenswrapper[4793]: E0126 22:40:24.070836 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.091821 4793 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.104833 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.123884 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.137291 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.155560 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.161118 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.161232 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.161260 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.161295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.161314 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:24Z","lastTransitionTime":"2026-01-26T22:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.190651 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6da69a68e191910fa526697d2561b385a8cc87e0a79b99f66ec85845dcf9f64c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 22:40:22.901414 6050 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 22:40:22.901450 6050 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:40:22.901483 6050 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 
22:40:22.901503 6050 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 22:40:22.901510 6050 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 22:40:22.901531 6050 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 22:40:22.901559 6050 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 22:40:22.901565 6050 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:40:22.901588 6050 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 22:40:22.901606 6050 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 22:40:22.901653 6050 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 22:40:22.901691 6050 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 22:40:22.901714 6050 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 22:40:22.901730 6050 factory.go:656] Stopping watch factory\\\\nI0126 22:40:22.901747 6050 ovnkube.go:599] Stopped ovnkube\\\\nI0126 22:40:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"message\\\":\\\"ost \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z]\\\\nI0126 22:40:23.922077 6235 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922104 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922122 6235 ovn.go:134] Ensuring zone local for 
Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0126 22:40:23.922135 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0126 22:40:23.922055 6235 services_controller.go:434] Service openshift-marketplace/redhat-operators retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{redhat-operators openshift-marketplace 8ef79441-cef6-4ba0-a073-a7b752dbbb3e 5667 0 2025-02-23 05:23:27 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators e1bbbbdb-a019-4415-8578-8f8fe53276e0 0xc0078d0b5d 0xc0078d0b5e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-c
ni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.208656 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.228324 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.245749 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.258701 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.263990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.264035 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.264045 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.264069 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.264083 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:24Z","lastTransitionTime":"2026-01-26T22:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.272390 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.287489 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\"
,\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.299260 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.311331 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.366931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.367034 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.367044 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.367078 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.367091 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:24Z","lastTransitionTime":"2026-01-26T22:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.470550 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.470602 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.470614 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.470634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.470649 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:24Z","lastTransitionTime":"2026-01-26T22:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.573711 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.573802 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.573828 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.573914 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.573949 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:24Z","lastTransitionTime":"2026-01-26T22:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.677375 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.677429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.677446 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.677473 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.677492 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:24Z","lastTransitionTime":"2026-01-26T22:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.727035 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 01:17:51.912249207 +0000 UTC Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.780899 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.780968 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.780987 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.781016 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.781037 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:24Z","lastTransitionTime":"2026-01-26T22:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.884359 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.884465 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.884497 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.884543 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.884579 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:24Z","lastTransitionTime":"2026-01-26T22:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.990516 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.990574 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.990592 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.990615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.990635 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:24Z","lastTransitionTime":"2026-01-26T22:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.075104 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/1.log" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.079903 4793 scope.go:117] "RemoveContainer" containerID="5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2" Jan 26 22:40:25 crc kubenswrapper[4793]: E0126 22:40:25.080218 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.093245 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.093293 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.093308 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.093329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.093343 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:25Z","lastTransitionTime":"2026-01-26T22:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.102455 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.123582 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.141109 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.160079 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.191556 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"message\\\":\\\"ost \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z]\\\\nI0126 22:40:23.922077 6235 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922104 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922122 6235 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0126 22:40:23.922135 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0126 22:40:23.922055 6235 services_controller.go:434] Service openshift-marketplace/redhat-operators retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{redhat-operators openshift-marketplace 8ef79441-cef6-4ba0-a073-a7b752dbbb3e 5667 0 2025-02-23 05:23:27 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators e1bbbbdb-a019-4415-8578-8f8fe53276e0 0xc0078d0b5d 0xc0078d0b5e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd
48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.196570 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.196659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.196680 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.196711 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.196739 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:25Z","lastTransitionTime":"2026-01-26T22:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.242931 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.265437 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\"
,\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.280846 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.294984 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.299619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.299684 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.299699 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.299722 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.299737 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:25Z","lastTransitionTime":"2026-01-26T22:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.308299 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.319612 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.339136 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.362870 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.379542 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.404823 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.404906 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.404924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.404950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.404969 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:25Z","lastTransitionTime":"2026-01-26T22:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.508908 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.508987 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.509006 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.509036 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.509061 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:25Z","lastTransitionTime":"2026-01-26T22:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.613484 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.613572 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.613592 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.613625 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.613647 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:25Z","lastTransitionTime":"2026-01-26T22:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.616267 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh"] Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.617055 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.621889 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.622076 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.645953 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.662051 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.683550 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.705057 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.714239 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b2178ad7-9b5f-4304-810f-f35026a9a27c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wb7lh\" (UID: 
\"b2178ad7-9b5f-4304-810f-f35026a9a27c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.714387 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m59gn\" (UniqueName: \"kubernetes.io/projected/b2178ad7-9b5f-4304-810f-f35026a9a27c-kube-api-access-m59gn\") pod \"ovnkube-control-plane-749d76644c-wb7lh\" (UID: \"b2178ad7-9b5f-4304-810f-f35026a9a27c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.714454 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b2178ad7-9b5f-4304-810f-f35026a9a27c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wb7lh\" (UID: \"b2178ad7-9b5f-4304-810f-f35026a9a27c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.714491 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b2178ad7-9b5f-4304-810f-f35026a9a27c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wb7lh\" (UID: \"b2178ad7-9b5f-4304-810f-f35026a9a27c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.716582 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.716628 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.716645 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.716673 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.716693 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:25Z","lastTransitionTime":"2026-01-26T22:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.722491 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.727506 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 11:04:25.068888937 +0000 UTC Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.739128 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753e
bdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.762517 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.762650 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:25 crc kubenswrapper[4793]: E0126 22:40:25.762690 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.762783 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:25 crc kubenswrapper[4793]: E0126 22:40:25.762839 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:25 crc kubenswrapper[4793]: E0126 22:40:25.763038 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.770314 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"message\\\":\\\"ost \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z]\\\\nI0126 22:40:23.922077 6235 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922104 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922122 6235 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0126 22:40:23.922135 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0126 22:40:23.922055 6235 services_controller.go:434] Service openshift-marketplace/redhat-operators retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{redhat-operators openshift-marketplace 8ef79441-cef6-4ba0-a073-a7b752dbbb3e 5667 0 2025-02-23 05:23:27 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators e1bbbbdb-a019-4415-8578-8f8fe53276e0 0xc0078d0b5d 0xc0078d0b5e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd
48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.788749 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4
183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.805409 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.815222 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m59gn\" (UniqueName: \"kubernetes.io/projected/b2178ad7-9b5f-4304-810f-f35026a9a27c-kube-api-access-m59gn\") pod \"ovnkube-control-plane-749d76644c-wb7lh\" (UID: \"b2178ad7-9b5f-4304-810f-f35026a9a27c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.815322 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b2178ad7-9b5f-4304-810f-f35026a9a27c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wb7lh\" (UID: \"b2178ad7-9b5f-4304-810f-f35026a9a27c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.815362 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/b2178ad7-9b5f-4304-810f-f35026a9a27c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wb7lh\" (UID: \"b2178ad7-9b5f-4304-810f-f35026a9a27c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.815452 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b2178ad7-9b5f-4304-810f-f35026a9a27c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wb7lh\" (UID: \"b2178ad7-9b5f-4304-810f-f35026a9a27c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.816152 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b2178ad7-9b5f-4304-810f-f35026a9a27c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wb7lh\" (UID: \"b2178ad7-9b5f-4304-810f-f35026a9a27c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.816410 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b2178ad7-9b5f-4304-810f-f35026a9a27c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wb7lh\" (UID: \"b2178ad7-9b5f-4304-810f-f35026a9a27c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.824587 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.824646 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.824661 4793 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.824684 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.824700 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:25Z","lastTransitionTime":"2026-01-26T22:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.828929 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b2178ad7-9b5f-4304-810f-f35026a9a27c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wb7lh\" (UID: \"b2178ad7-9b5f-4304-810f-f35026a9a27c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.832685 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.840823 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m59gn\" (UniqueName: \"kubernetes.io/projected/b2178ad7-9b5f-4304-810f-f35026a9a27c-kube-api-access-m59gn\") pod \"ovnkube-control-plane-749d76644c-wb7lh\" (UID: \"b2178ad7-9b5f-4304-810f-f35026a9a27c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.854295 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.877444 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.894656 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.913681 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.928355 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.928425 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.928453 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.928490 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.928512 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:25Z","lastTransitionTime":"2026-01-26T22:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.931978 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.942253 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.949215 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\
":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apise
rver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: W0126 22:40:25.969240 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2178ad7_9b5f_4304_810f_f35026a9a27c.slice/crio-d7570436685bca503dad1c2b342de37a210b52b607b75e777c45f350f8cf8a04 WatchSource:0}: Error finding container d7570436685bca503dad1c2b342de37a210b52b607b75e777c45f350f8cf8a04: Status 404 returned error can't find the container with id d7570436685bca503dad1c2b342de37a210b52b607b75e777c45f350f8cf8a04 Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.973044 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.993724 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.009722 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.025031 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.033707 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.033756 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.033774 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.033802 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.033822 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:26Z","lastTransitionTime":"2026-01-26T22:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.038043 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.051917 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.063436 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.076361 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.084097 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" event={"ID":"b2178ad7-9b5f-4304-810f-f35026a9a27c","Type":"ContainerStarted","Data":"d7570436685bca503dad1c2b342de37a210b52b607b75e777c45f350f8cf8a04"} Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.091726 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4
183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.104264 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.115744 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.125220 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.137580 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.137786 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.137862 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.137927 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.137984 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:26Z","lastTransitionTime":"2026-01-26T22:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.138455 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.160941 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"message\\\":\\\"ost \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z]\\\\nI0126 22:40:23.922077 6235 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922104 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922122 6235 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0126 22:40:23.922135 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0126 22:40:23.922055 6235 services_controller.go:434] Service openshift-marketplace/redhat-operators retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{redhat-operators openshift-marketplace 8ef79441-cef6-4ba0-a073-a7b752dbbb3e 5667 0 2025-02-23 05:23:27 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators e1bbbbdb-a019-4415-8578-8f8fe53276e0 0xc0078d0b5d 0xc0078d0b5e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd
48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.240362 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.240396 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.240405 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.240420 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.240430 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:26Z","lastTransitionTime":"2026-01-26T22:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.342687 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.343046 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.343147 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.343251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.343382 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:26Z","lastTransitionTime":"2026-01-26T22:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.447634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.447892 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.448214 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.448286 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.448343 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:26Z","lastTransitionTime":"2026-01-26T22:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.551697 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.551785 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.551810 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.551846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.551866 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:26Z","lastTransitionTime":"2026-01-26T22:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.655235 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.655320 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.655352 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.655395 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.655425 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:26Z","lastTransitionTime":"2026-01-26T22:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.728175 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 06:38:55.316107526 +0000 UTC Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.748661 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-7rl9w"] Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.749436 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:26 crc kubenswrapper[4793]: E0126 22:40:26.749526 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.758945 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.759000 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.759029 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.759059 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.759083 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:26Z","lastTransitionTime":"2026-01-26T22:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.780962 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a8
89fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.799640 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.816119 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.829841 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.829959 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d8q5\" (UniqueName: \"kubernetes.io/projected/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-kube-api-access-9d8q5\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.841067 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.860695 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.861671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.861708 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.861717 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.861732 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.861744 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:26Z","lastTransitionTime":"2026-01-26T22:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.873790 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.890077 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.908673 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"message\\\":\\\"ost \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z]\\\\nI0126 22:40:23.922077 6235 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922104 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922122 6235 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0126 22:40:23.922135 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0126 22:40:23.922055 6235 services_controller.go:434] Service openshift-marketplace/redhat-operators retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{redhat-operators openshift-marketplace 8ef79441-cef6-4ba0-a073-a7b752dbbb3e 5667 0 2025-02-23 05:23:27 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators e1bbbbdb-a019-4415-8578-8f8fe53276e0 0xc0078d0b5d 0xc0078d0b5e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd
48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.925204 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4
183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.931480 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:26 crc kubenswrapper[4793]: E0126 22:40:26.931676 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:26 crc kubenswrapper[4793]: E0126 22:40:26.931752 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs podName:2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc nodeName:}" failed. No retries permitted until 2026-01-26 22:40:27.431727959 +0000 UTC m=+42.420499471 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs") pod "network-metrics-daemon-7rl9w" (UID: "2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.931673 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d8q5\" (UniqueName: \"kubernetes.io/projected/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-kube-api-access-9d8q5\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.942309 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing 
delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"stat
e\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.957784 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.960682 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d8q5\" (UniqueName: \"kubernetes.io/projected/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-kube-api-access-9d8q5\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:26 crc 
kubenswrapper[4793]: I0126 22:40:26.963867 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.963905 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.963920 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.963942 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.963953 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:26Z","lastTransitionTime":"2026-01-26T22:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.983713 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.002775 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.023484 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc 
kubenswrapper[4793]: I0126 22:40:27.044845 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.064153 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.066156 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.066226 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.066240 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.066264 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.066280 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:27Z","lastTransitionTime":"2026-01-26T22:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.090649 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" event={"ID":"b2178ad7-9b5f-4304-810f-f35026a9a27c","Type":"ContainerStarted","Data":"ddad02b568468ce37b7c10fc5b00f66af1da4517b931e06fea0f077bbd7d61bf"} Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.090717 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" event={"ID":"b2178ad7-9b5f-4304-810f-f35026a9a27c","Type":"ContainerStarted","Data":"263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944"} Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.113852 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0
f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.126810 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.143171 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.165826 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.168682 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.168716 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.168728 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.168746 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.168757 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:27Z","lastTransitionTime":"2026-01-26T22:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.186590 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.207539 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.263789 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf267
0a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.272747 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.272933 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.273088 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.273224 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.273336 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:27Z","lastTransitionTime":"2026-01-26T22:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.301856 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"message\\\":\\\"ost \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z]\\\\nI0126 22:40:23.922077 6235 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922104 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922122 6235 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0126 22:40:23.922135 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0126 22:40:23.922055 6235 services_controller.go:434] Service openshift-marketplace/redhat-operators retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{redhat-operators openshift-marketplace 8ef79441-cef6-4ba0-a073-a7b752dbbb3e 5667 0 2025-02-23 05:23:27 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators e1bbbbdb-a019-4415-8578-8f8fe53276e0 0xc0078d0b5d 0xc0078d0b5e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd
48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.325919 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4
183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.348769 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.368250 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.376694 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.376740 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.376758 4793 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.376789 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.376809 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:27Z","lastTransitionTime":"2026-01-26T22:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.387810 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.406484 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.422356 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc 
kubenswrapper[4793]: I0126 22:40:27.438232 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:27 crc kubenswrapper[4793]: E0126 22:40:27.438578 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:27 crc kubenswrapper[4793]: E0126 22:40:27.438751 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs podName:2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc nodeName:}" failed. No retries permitted until 2026-01-26 22:40:28.438708295 +0000 UTC m=+43.427480077 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs") pod "network-metrics-daemon-7rl9w" (UID: "2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.444957 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.462337 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.480200 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.480243 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.480256 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.480275 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.480291 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:27Z","lastTransitionTime":"2026-01-26T22:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.583895 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.584390 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.584547 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.584755 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.584918 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:27Z","lastTransitionTime":"2026-01-26T22:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.687794 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.687840 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.687848 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.687865 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.687880 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:27Z","lastTransitionTime":"2026-01-26T22:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.728365 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 22:34:16.887113603 +0000 UTC Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.760184 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.760320 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:27 crc kubenswrapper[4793]: E0126 22:40:27.760365 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.760425 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:27 crc kubenswrapper[4793]: E0126 22:40:27.760619 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.760654 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:27 crc kubenswrapper[4793]: E0126 22:40:27.760909 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:27 crc kubenswrapper[4793]: E0126 22:40:27.760958 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.791126 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.791168 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.791177 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.791229 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.791241 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:27Z","lastTransitionTime":"2026-01-26T22:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.894013 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.894086 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.894105 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.894134 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.894155 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:27Z","lastTransitionTime":"2026-01-26T22:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.997801 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.997853 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.997867 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.997887 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.997902 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:27Z","lastTransitionTime":"2026-01-26T22:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.099617 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.099668 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.099681 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.099701 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.099713 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:28Z","lastTransitionTime":"2026-01-26T22:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.203229 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.203329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.203350 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.203382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.203402 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:28Z","lastTransitionTime":"2026-01-26T22:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.307862 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.307933 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.307957 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.307990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.308013 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:28Z","lastTransitionTime":"2026-01-26T22:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.411477 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.411537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.411548 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.411572 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.411590 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:28Z","lastTransitionTime":"2026-01-26T22:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.451060 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:28 crc kubenswrapper[4793]: E0126 22:40:28.451356 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:28 crc kubenswrapper[4793]: E0126 22:40:28.451458 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs podName:2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc nodeName:}" failed. No retries permitted until 2026-01-26 22:40:30.451430805 +0000 UTC m=+45.440202347 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs") pod "network-metrics-daemon-7rl9w" (UID: "2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.514392 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.514468 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.514484 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.514506 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.514521 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:28Z","lastTransitionTime":"2026-01-26T22:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.617437 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.617505 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.617525 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.617544 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.617561 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:28Z","lastTransitionTime":"2026-01-26T22:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.721265 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.721366 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.721428 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.721467 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.721495 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:28Z","lastTransitionTime":"2026-01-26T22:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.729421 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 21:58:30.535059824 +0000 UTC Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.825182 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.825286 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.825309 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.825337 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.825360 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:28Z","lastTransitionTime":"2026-01-26T22:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.929296 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.929368 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.929386 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.929418 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.929439 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:28Z","lastTransitionTime":"2026-01-26T22:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.032519 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.032617 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.032638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.032663 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.032712 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:29Z","lastTransitionTime":"2026-01-26T22:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.136269 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.136344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.136391 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.136421 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.136442 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:29Z","lastTransitionTime":"2026-01-26T22:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.240268 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.240329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.240347 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.240374 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.240425 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:29Z","lastTransitionTime":"2026-01-26T22:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.342836 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.342889 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.342908 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.342933 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.342951 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:29Z","lastTransitionTime":"2026-01-26T22:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.445877 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.445926 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.445939 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.445961 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.445974 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:29Z","lastTransitionTime":"2026-01-26T22:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.549870 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.549935 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.549955 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.549982 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.550001 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:29Z","lastTransitionTime":"2026-01-26T22:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.652988 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.653064 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.653083 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.653110 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.653131 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:29Z","lastTransitionTime":"2026-01-26T22:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.730380 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 20:20:45.584363756 +0000 UTC Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.756084 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.756179 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.756232 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.756262 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.756280 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:29Z","lastTransitionTime":"2026-01-26T22:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.760564 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:29 crc kubenswrapper[4793]: E0126 22:40:29.760724 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.760824 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:29 crc kubenswrapper[4793]: E0126 22:40:29.760923 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.761435 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:29 crc kubenswrapper[4793]: E0126 22:40:29.761739 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.761503 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:29 crc kubenswrapper[4793]: E0126 22:40:29.762067 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.861013 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.861391 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.861411 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.861442 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.861460 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:29Z","lastTransitionTime":"2026-01-26T22:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.965200 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.965242 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.965251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.965268 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.965281 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:29Z","lastTransitionTime":"2026-01-26T22:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.067952 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.068027 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.068047 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.068079 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.068104 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:30Z","lastTransitionTime":"2026-01-26T22:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.174468 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.174546 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.174571 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.174608 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.174637 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:30Z","lastTransitionTime":"2026-01-26T22:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.277967 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.278025 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.278044 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.278080 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.278102 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:30Z","lastTransitionTime":"2026-01-26T22:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.382448 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.382534 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.382556 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.382590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.382614 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:30Z","lastTransitionTime":"2026-01-26T22:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.480763 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:30 crc kubenswrapper[4793]: E0126 22:40:30.481026 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:30 crc kubenswrapper[4793]: E0126 22:40:30.481164 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs podName:2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc nodeName:}" failed. No retries permitted until 2026-01-26 22:40:34.481129853 +0000 UTC m=+49.469901405 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs") pod "network-metrics-daemon-7rl9w" (UID: "2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.485903 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.485946 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.485959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.485982 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.485998 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:30Z","lastTransitionTime":"2026-01-26T22:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.589005 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.589092 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.589118 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.589152 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.589175 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:30Z","lastTransitionTime":"2026-01-26T22:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.693057 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.693130 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.693149 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.693179 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.693240 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:30Z","lastTransitionTime":"2026-01-26T22:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.730772 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 09:12:55.001686266 +0000 UTC Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.796145 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.796227 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.796249 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.796274 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.796291 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:30Z","lastTransitionTime":"2026-01-26T22:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.899688 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.899755 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.899779 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.899806 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.899825 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:30Z","lastTransitionTime":"2026-01-26T22:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.003930 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.004074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.004096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.004130 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.004153 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:31Z","lastTransitionTime":"2026-01-26T22:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.107798 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.107862 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.107878 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.107903 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.107921 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:31Z","lastTransitionTime":"2026-01-26T22:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.210842 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.210876 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.210887 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.210906 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.210919 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:31Z","lastTransitionTime":"2026-01-26T22:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.314784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.314868 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.314896 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.314930 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.314954 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:31Z","lastTransitionTime":"2026-01-26T22:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.418874 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.418943 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.418962 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.418989 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.419011 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:31Z","lastTransitionTime":"2026-01-26T22:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.522594 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.522659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.522677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.522704 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.522722 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:31Z","lastTransitionTime":"2026-01-26T22:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.626755 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.626825 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.626843 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.626871 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.626889 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:31Z","lastTransitionTime":"2026-01-26T22:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.729550 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.729613 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.729632 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.729662 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.729685 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:31Z","lastTransitionTime":"2026-01-26T22:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.731110 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 15:23:28.873199057 +0000 UTC Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.760145 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.760224 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:31 crc kubenswrapper[4793]: E0126 22:40:31.760404 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.760506 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.760507 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:31 crc kubenswrapper[4793]: E0126 22:40:31.760703 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:31 crc kubenswrapper[4793]: E0126 22:40:31.760915 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:31 crc kubenswrapper[4793]: E0126 22:40:31.762729 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.834219 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.834291 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.834310 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.834341 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.834360 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:31Z","lastTransitionTime":"2026-01-26T22:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.937975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.938045 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.938071 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.938099 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.938121 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:31Z","lastTransitionTime":"2026-01-26T22:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.041788 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.041859 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.041881 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.041908 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.041931 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.144595 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.144653 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.144672 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.144697 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.144715 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.248490 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.248616 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.248643 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.248672 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.248693 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.352352 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.352418 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.352436 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.352462 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.352480 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.455379 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.455458 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.455479 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.455510 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.455532 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.546715 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.546783 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.546804 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.546830 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.546848 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: E0126 22:40:32.568638 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:32Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.574997 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.575075 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.575101 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.575134 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.575153 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: E0126 22:40:32.598803 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:32Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.604543 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.604946 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.605056 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.605096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.605302 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: E0126 22:40:32.627412 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:32Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.633495 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.633583 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.633614 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.633649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.633676 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: E0126 22:40:32.654592 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:32Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.659823 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.659890 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.659915 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.659946 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.659969 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: E0126 22:40:32.683148 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:32Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:32 crc kubenswrapper[4793]: E0126 22:40:32.683411 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.685810 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.685862 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.685879 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.685902 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.685920 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.731908 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 01:38:30.333163516 +0000 UTC Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.788962 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.789054 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.789074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.789103 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.789124 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.893467 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.893560 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.893582 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.893622 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.893643 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.001727 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.001836 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.001860 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.001901 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.001937 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:33Z","lastTransitionTime":"2026-01-26T22:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.105788 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.105861 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.105917 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.105947 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.105968 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:33Z","lastTransitionTime":"2026-01-26T22:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.209659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.209729 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.209747 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.209777 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.209796 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:33Z","lastTransitionTime":"2026-01-26T22:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.314377 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.314431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.314450 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.314478 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.314495 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:33Z","lastTransitionTime":"2026-01-26T22:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.418741 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.418809 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.418828 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.418853 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.418871 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:33Z","lastTransitionTime":"2026-01-26T22:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.521676 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.521741 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.521759 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.521798 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.521817 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:33Z","lastTransitionTime":"2026-01-26T22:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.624800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.624885 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.624918 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.624947 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.624971 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:33Z","lastTransitionTime":"2026-01-26T22:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.730780 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.730859 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.730887 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.730917 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.730936 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:33Z","lastTransitionTime":"2026-01-26T22:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.732844 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 08:23:59.533844777 +0000 UTC Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.760781 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:33 crc kubenswrapper[4793]: E0126 22:40:33.761020 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.761155 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:33 crc kubenswrapper[4793]: E0126 22:40:33.761400 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.761565 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.761771 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:33 crc kubenswrapper[4793]: E0126 22:40:33.761955 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:33 crc kubenswrapper[4793]: E0126 22:40:33.762117 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.834278 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.834332 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.834348 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.834373 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.834390 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:33Z","lastTransitionTime":"2026-01-26T22:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.938125 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.938227 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.938249 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.938283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.938304 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:33Z","lastTransitionTime":"2026-01-26T22:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.041350 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.041418 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.041437 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.041468 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.041488 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:34Z","lastTransitionTime":"2026-01-26T22:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.145022 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.145085 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.145104 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.145132 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.145166 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:34Z","lastTransitionTime":"2026-01-26T22:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.248446 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.248500 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.248517 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.248543 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.248564 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:34Z","lastTransitionTime":"2026-01-26T22:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.352358 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.352431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.352453 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.352480 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.352500 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:34Z","lastTransitionTime":"2026-01-26T22:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.456464 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.456541 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.456560 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.456585 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.456603 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:34Z","lastTransitionTime":"2026-01-26T22:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.528707 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:34 crc kubenswrapper[4793]: E0126 22:40:34.528999 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:34 crc kubenswrapper[4793]: E0126 22:40:34.529114 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs podName:2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc nodeName:}" failed. No retries permitted until 2026-01-26 22:40:42.52908402 +0000 UTC m=+57.517855572 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs") pod "network-metrics-daemon-7rl9w" (UID: "2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.559912 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.559980 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.560000 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.560033 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.560058 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:34Z","lastTransitionTime":"2026-01-26T22:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.663422 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.663485 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.663506 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.663531 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.663549 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:34Z","lastTransitionTime":"2026-01-26T22:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.733634 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 20:20:32.53901089 +0000 UTC Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.766841 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.766931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.766953 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.767000 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.767029 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:34Z","lastTransitionTime":"2026-01-26T22:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.870413 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.870476 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.870498 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.870524 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.870543 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:34Z","lastTransitionTime":"2026-01-26T22:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.974295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.974395 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.974419 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.974450 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.974478 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:34Z","lastTransitionTime":"2026-01-26T22:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.078144 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.078251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.078269 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.078300 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.078322 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:35Z","lastTransitionTime":"2026-01-26T22:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.182431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.182508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.182528 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.182551 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.182574 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:35Z","lastTransitionTime":"2026-01-26T22:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.285321 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.285385 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.285404 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.285430 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.285448 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:35Z","lastTransitionTime":"2026-01-26T22:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.388226 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.388298 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.388331 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.388363 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.388385 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:35Z","lastTransitionTime":"2026-01-26T22:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.491995 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.492051 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.492069 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.492096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.492116 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:35Z","lastTransitionTime":"2026-01-26T22:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.595163 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.595230 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.595240 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.595255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.595265 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:35Z","lastTransitionTime":"2026-01-26T22:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.698849 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.698944 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.698967 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.698993 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.699012 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:35Z","lastTransitionTime":"2026-01-26T22:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.734792 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 17:05:17.300560299 +0000 UTC Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.760541 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.760554 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.760715 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:35 crc kubenswrapper[4793]: E0126 22:40:35.760919 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:35 crc kubenswrapper[4793]: E0126 22:40:35.761103 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:35 crc kubenswrapper[4793]: E0126 22:40:35.761382 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.762753 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:35 crc kubenswrapper[4793]: E0126 22:40:35.763156 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.784105 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:35Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.801815 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:35Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.802049 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.802086 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.802103 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.802129 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.802147 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:35Z","lastTransitionTime":"2026-01-26T22:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.826533 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:35Z 
is after 2025-08-24T17:21:41Z" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.849448 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:35Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.874892 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:35Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.895983 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:35Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.908559 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.908604 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.908617 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.908638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.908653 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:35Z","lastTransitionTime":"2026-01-26T22:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.910805 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:35Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.927323 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T22:40:35Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.957880 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"message\\\":\\\"ost \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z]\\\\nI0126 22:40:23.922077 6235 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922104 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922122 6235 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0126 22:40:23.922135 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0126 22:40:23.922055 6235 services_controller.go:434] Service openshift-marketplace/redhat-operators retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{redhat-operators openshift-marketplace 8ef79441-cef6-4ba0-a073-a7b752dbbb3e 5667 0 2025-02-23 05:23:27 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators e1bbbbdb-a019-4415-8578-8f8fe53276e0 0xc0078d0b5d 0xc0078d0b5e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd
48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:35Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.972574 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:35Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.993784 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:35Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.013585 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.013963 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.014033 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.014064 4793 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.014100 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.014129 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:36Z","lastTransitionTime":"2026-01-26T22:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.034035 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.052682 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.074661 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.091776 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.120169 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.120280 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.120308 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.120343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.120363 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:36Z","lastTransitionTime":"2026-01-26T22:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.223257 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.223333 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.223354 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.223383 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.223408 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:36Z","lastTransitionTime":"2026-01-26T22:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.305123 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.322287 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.326783 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.326860 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.326893 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.326928 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.326954 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:36Z","lastTransitionTime":"2026-01-26T22:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.329923 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.349500 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.369307 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.405175 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"message\\\":\\\"ost \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z]\\\\nI0126 22:40:23.922077 6235 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922104 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922122 6235 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0126 22:40:23.922135 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0126 22:40:23.922055 6235 services_controller.go:434] Service openshift-marketplace/redhat-operators retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{redhat-operators openshift-marketplace 8ef79441-cef6-4ba0-a073-a7b752dbbb3e 5667 0 2025-02-23 05:23:27 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators e1bbbbdb-a019-4415-8578-8f8fe53276e0 0xc0078d0b5d 0xc0078d0b5e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd
48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.431381 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.431451 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.431472 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.431502 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.431522 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:36Z","lastTransitionTime":"2026-01-26T22:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.431490 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.452414 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.471000 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.491120 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.509033 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4
517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.531709 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc 
kubenswrapper[4793]: I0126 22:40:36.535279 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.535343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.535361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.535389 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.535409 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:36Z","lastTransitionTime":"2026-01-26T22:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.554998 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.575247 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.592896 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.611478 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.634534 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.638589 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:36 crc 
kubenswrapper[4793]: I0126 22:40:36.638661 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.638685 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.638719 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.638763 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:36Z","lastTransitionTime":"2026-01-26T22:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.661064 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.735886 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 14:34:01.056733053 +0000 UTC Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.742675 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.742736 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.742755 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.742784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.742801 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:36Z","lastTransitionTime":"2026-01-26T22:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.846632 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.846705 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.846724 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.846751 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.846772 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:36Z","lastTransitionTime":"2026-01-26T22:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.950575 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.950645 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.950675 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.950706 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.950730 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:36Z","lastTransitionTime":"2026-01-26T22:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.053969 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.054038 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.054058 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.054085 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.054118 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:37Z","lastTransitionTime":"2026-01-26T22:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.158133 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.158237 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.158259 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.158287 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.158308 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:37Z","lastTransitionTime":"2026-01-26T22:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.261438 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.261505 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.261525 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.261554 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.261572 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:37Z","lastTransitionTime":"2026-01-26T22:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.365631 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.365726 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.365748 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.366108 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.366134 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:37Z","lastTransitionTime":"2026-01-26T22:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.462542 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.462725 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.462771 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.462837 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.462890 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.463075 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.463179 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:41:09.463149094 +0000 UTC m=+84.451920646 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.463551 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:41:09.463532035 +0000 UTC m=+84.452303577 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.463571 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.463670 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.463697 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.463722 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.463773 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.463844 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.463874 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.464465 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:41:09.463729171 +0000 UTC m=+84.452500713 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.464533 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 22:41:09.464514702 +0000 UTC m=+84.453286244 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.464589 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 22:41:09.464548623 +0000 UTC m=+84.453320165 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.472344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.472400 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.472422 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.472457 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.472479 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:37Z","lastTransitionTime":"2026-01-26T22:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.582916 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.582990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.583226 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.583259 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.583300 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:37Z","lastTransitionTime":"2026-01-26T22:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.687777 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.687848 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.687869 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.687945 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.687967 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:37Z","lastTransitionTime":"2026-01-26T22:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.736119 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 06:29:28.062805736 +0000 UTC
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.760680 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.760882 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.761170 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.761135 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.761356 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.761622 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.761679 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.762423 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.762932 4793 scope.go:117] "RemoveContainer" containerID="5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.791475 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.791548 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.791567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.791599 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.791621 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:37Z","lastTransitionTime":"2026-01-26T22:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.895837 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.896371 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.896396 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.896425 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.896444 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:37Z","lastTransitionTime":"2026-01-26T22:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.000636 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.000724 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.000744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.000781 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.000805 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:38Z","lastTransitionTime":"2026-01-26T22:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.104075 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.104153 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.104180 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.104244 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.104270 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:38Z","lastTransitionTime":"2026-01-26T22:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.207842 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.207912 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.207931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.207962 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.207985 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:38Z","lastTransitionTime":"2026-01-26T22:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.311701 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.311758 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.311783 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.311820 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.311848 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:38Z","lastTransitionTime":"2026-01-26T22:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.415772 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.415859 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.415888 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.415925 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.415955 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:38Z","lastTransitionTime":"2026-01-26T22:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.519069 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.519135 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.519161 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.519220 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.519242 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:38Z","lastTransitionTime":"2026-01-26T22:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.622328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.622397 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.622415 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.622448 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.622470 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:38Z","lastTransitionTime":"2026-01-26T22:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.725820 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.725894 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.725911 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.725935 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.725955 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:38Z","lastTransitionTime":"2026-01-26T22:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.737034 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 12:09:44.863983883 +0000 UTC
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.829460 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.829524 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.829542 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.829569 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.829587 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:38Z","lastTransitionTime":"2026-01-26T22:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.932925 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.932998 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.933024 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.933077 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.933105 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:38Z","lastTransitionTime":"2026-01-26T22:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.036135 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.036228 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.036246 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.036296 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.036319 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:39Z","lastTransitionTime":"2026-01-26T22:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.139585 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.139649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.139670 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.139697 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.139715 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:39Z","lastTransitionTime":"2026-01-26T22:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.145607 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/1.log"
Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.149343 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerStarted","Data":"f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d"}
Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.149827 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk"
Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.165459 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z"
Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.179161 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z"
Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.202643 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.218536 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576cd1b-4786-4a18-b570-5f961f464036\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5197efcde6d2061f530a68ea7c0c99ec4446554b84a1811a7d970a43797ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2a1df857107cb1b07fd3524ba4d508bb2694a49e2de3a96c9938ec4bbdecef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03472d948157db27ba9a9cf1410100a91b86b0e07784e05cd870871099ad333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.230911 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31fa
a78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.243041 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.243096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.243112 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:39 crc 
kubenswrapper[4793]: I0126 22:40:39.243136 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.243154 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:39Z","lastTransitionTime":"2026-01-26T22:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.250425 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.276263 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.296331 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.314067 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.333292 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.346091 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.346147 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.346165 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.346206 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.346224 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:39Z","lastTransitionTime":"2026-01-26T22:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.357571 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"message\\\":\\\"ost \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z]\\\\nI0126 22:40:23.922077 6235 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922104 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922122 6235 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0126 22:40:23.922135 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0126 22:40:23.922055 6235 services_controller.go:434] Service openshift-marketplace/redhat-operators retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{redhat-operators openshift-marketplace 8ef79441-cef6-4ba0-a073-a7b752dbbb3e 5667 0 2025-02-23 05:23:27 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators e1bbbbdb-a019-4415-8578-8f8fe53276e0 0xc0078d0b5d 0xc0078d0b5e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-ope
nvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac
2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.382825 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4
183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.411855 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.433805 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.449575 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.449632 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.449650 4793 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.449677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.449697 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:39Z","lastTransitionTime":"2026-01-26T22:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.455535 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.472236 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.492796 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc 
kubenswrapper[4793]: I0126 22:40:39.554716 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.554795 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.554818 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.554850 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.554873 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:39Z","lastTransitionTime":"2026-01-26T22:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.658230 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.658316 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.658338 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.658373 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.658402 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:39Z","lastTransitionTime":"2026-01-26T22:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.737297 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 19:34:59.325383698 +0000 UTC Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.760044 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.760103 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.760113 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.760164 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:39 crc kubenswrapper[4793]: E0126 22:40:39.760342 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:39 crc kubenswrapper[4793]: E0126 22:40:39.760511 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:39 crc kubenswrapper[4793]: E0126 22:40:39.760641 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:39 crc kubenswrapper[4793]: E0126 22:40:39.760745 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.762125 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.762237 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.762264 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.762293 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.762321 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:39Z","lastTransitionTime":"2026-01-26T22:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.865896 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.865965 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.865986 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.866031 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.866063 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:39Z","lastTransitionTime":"2026-01-26T22:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.972593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.972693 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.972722 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.972756 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.972774 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:39Z","lastTransitionTime":"2026-01-26T22:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.075830 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.075907 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.075931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.075962 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.075982 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:40Z","lastTransitionTime":"2026-01-26T22:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.159149 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/2.log" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.160860 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/1.log" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.167928 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d" exitCode=1 Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.168017 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d"} Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.168098 4793 scope.go:117] "RemoveContainer" containerID="5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.169577 4793 scope.go:117] "RemoveContainer" containerID="f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d" Jan 26 22:40:40 crc kubenswrapper[4793]: E0126 22:40:40.170117 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.178470 4793 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.178607 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.178640 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.178681 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.178709 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:40Z","lastTransitionTime":"2026-01-26T22:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.193632 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.212820 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.231166 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.265601 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"message\\\":\\\"ost \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z]\\\\nI0126 22:40:23.922077 6235 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922104 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922122 6235 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0126 22:40:23.922135 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0126 22:40:23.922055 6235 services_controller.go:434] Service openshift-marketplace/redhat-operators retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{redhat-operators openshift-marketplace 8ef79441-cef6-4ba0-a073-a7b752dbbb3e 5667 0 2025-02-23 05:23:27 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators e1bbbbdb-a019-4415-8578-8f8fe53276e0 0xc0078d0b5d 0xc0078d0b5e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:39Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 22:40:39.089571 6447 services_controller.go:356] Processing sync for service openshift-console/downloads for network=default\\\\nI0126 22:40:39.089565 6447 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-7rl9w before 
timer (time: 2026-01-26 22:40:40.05437048 +0000 UTC m=+1.651757425): skip\\\\nI0126 22:40:39.089584 6447 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 49.751µs)\\\\nI0126 22:40:39.089666 6447 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 22:40:39.089724 6447 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:40:39.089753 6447 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:40:39.089754 6447 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 22:40:39.094644 6447 factory.go:656] Stopping watch factory\\\\nI0126 22:40:39.094689 6447 ovnkube.go:599] Stopped ovnkube\\\\nI0126 22:40:39.094730 6447 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 22:40:39.094846 6447 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"h
ost-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.282539 
4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.282627 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.282646 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.282673 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.282689 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:40Z","lastTransitionTime":"2026-01-26T22:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.291864 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.314106 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.342031 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.368353 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.387034 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.387103 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.387130 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.387234 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.387266 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:40Z","lastTransitionTime":"2026-01-26T22:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.388063 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.405705 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc 
kubenswrapper[4793]: I0126 22:40:40.430277 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d5
00d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.454278 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.471675 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.489392 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576cd1b-4786-4a18-b570-5f961f464036\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5197efcde6d2061f530a68ea7c0c99ec4446554b84a1811a7d970a43797ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2a1df857107cb1b07fd3524ba4d508bb2694a49e2de3a96c9938ec4bbdecef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03472d948157db27ba9a9cf1410100a91b86b0e07784e05cd870871099ad333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.490388 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.490529 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.490556 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.490589 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.490614 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:40Z","lastTransitionTime":"2026-01-26T22:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.505487 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.525688 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.546837 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.593764 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.593831 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.593852 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.593881 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.593901 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:40Z","lastTransitionTime":"2026-01-26T22:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.697851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.697911 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.697930 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.697955 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.697973 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:40Z","lastTransitionTime":"2026-01-26T22:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.738497 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 23:54:07.29838848 +0000 UTC Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.800555 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.800638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.800657 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.800679 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.800695 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:40Z","lastTransitionTime":"2026-01-26T22:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.903813 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.904141 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.904230 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.904258 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.904277 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:40Z","lastTransitionTime":"2026-01-26T22:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.008153 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.008215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.008225 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.008244 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.008258 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:41Z","lastTransitionTime":"2026-01-26T22:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.111477 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.111556 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.111582 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.111621 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.111644 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:41Z","lastTransitionTime":"2026-01-26T22:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.177755 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/2.log" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.183808 4793 scope.go:117] "RemoveContainer" containerID="f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d" Jan 26 22:40:41 crc kubenswrapper[4793]: E0126 22:40:41.184112 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.215429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.215506 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.215530 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.215559 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.215582 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:41Z","lastTransitionTime":"2026-01-26T22:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.218517 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:39Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 22:40:39.089571 6447 services_controller.go:356] Processing sync for service openshift-console/downloads for network=default\\\\nI0126 22:40:39.089565 
6447 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-7rl9w before timer (time: 2026-01-26 22:40:40.05437048 +0000 UTC m=+1.651757425): skip\\\\nI0126 22:40:39.089584 6447 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 49.751µs)\\\\nI0126 22:40:39.089666 6447 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 22:40:39.089724 6447 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:40:39.089753 6447 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:40:39.089754 6447 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 22:40:39.094644 6447 factory.go:656] Stopping watch factory\\\\nI0126 22:40:39.094689 6447 ovnkube.go:599] Stopped ovnkube\\\\nI0126 22:40:39.094730 6447 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 22:40:39.094846 6447 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd
48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.244210 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4
183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.264382 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.286561 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.304389 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.320335 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.320400 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.320425 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.320452 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.320473 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:41Z","lastTransitionTime":"2026-01-26T22:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.324968 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.344906 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4
517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.363667 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc 
kubenswrapper[4793]: I0126 22:40:41.386775 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d5
00d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.406265 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.423587 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.423664 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.423684 4793 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.423713 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.423734 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:41Z","lastTransitionTime":"2026-01-26T22:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.428608 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.449703 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.471151 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.493335 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.512148 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576cd1b-4786-4a18-b570-5f961f464036\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5197efcde6d2061f530a68ea7c0c99ec4446554b84a1811a7d970a43797ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2a1df857107cb1b07fd3524ba4d508bb2694a49e2de3a96c9938ec4bbdecef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03472d948157db27ba9a9cf1410100a91b86b0e07784e05cd870871099ad333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.526848 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.526929 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.526952 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.526979 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.527001 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:41Z","lastTransitionTime":"2026-01-26T22:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.532122 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.552889 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.629999 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.630067 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.630086 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.630115 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.630135 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:41Z","lastTransitionTime":"2026-01-26T22:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.732802 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.732883 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.732901 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.732930 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.732951 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:41Z","lastTransitionTime":"2026-01-26T22:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.738880 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 04:13:30.091838166 +0000 UTC Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.760615 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.760644 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.760694 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.760633 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:41 crc kubenswrapper[4793]: E0126 22:40:41.760823 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:41 crc kubenswrapper[4793]: E0126 22:40:41.761014 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:41 crc kubenswrapper[4793]: E0126 22:40:41.761169 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:41 crc kubenswrapper[4793]: E0126 22:40:41.761295 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.836046 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.836120 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.836142 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.836171 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.836229 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:41Z","lastTransitionTime":"2026-01-26T22:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.939847 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.939899 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.939910 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.939929 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.939943 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:41Z","lastTransitionTime":"2026-01-26T22:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.043687 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.043728 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.043738 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.043760 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.043774 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:42Z","lastTransitionTime":"2026-01-26T22:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.147543 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.147611 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.147631 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.147658 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.147678 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:42Z","lastTransitionTime":"2026-01-26T22:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.250936 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.251114 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.251140 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.251230 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.251256 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:42Z","lastTransitionTime":"2026-01-26T22:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.354909 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.354987 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.355007 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.355035 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.355089 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:42Z","lastTransitionTime":"2026-01-26T22:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.458437 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.458515 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.458537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.458568 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.458588 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:42Z","lastTransitionTime":"2026-01-26T22:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.562577 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.562651 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.562672 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.562702 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.562722 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:42Z","lastTransitionTime":"2026-01-26T22:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.627558 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:42 crc kubenswrapper[4793]: E0126 22:40:42.627763 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:42 crc kubenswrapper[4793]: E0126 22:40:42.627877 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs podName:2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc nodeName:}" failed. No retries permitted until 2026-01-26 22:40:58.627846742 +0000 UTC m=+73.616618294 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs") pod "network-metrics-daemon-7rl9w" (UID: "2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.666418 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.666473 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.666485 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.666505 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.666520 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:42Z","lastTransitionTime":"2026-01-26T22:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.739748 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:08:23.351864576 +0000 UTC Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.769359 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.769422 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.769441 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.769469 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.769490 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:42Z","lastTransitionTime":"2026-01-26T22:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.872926 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.872992 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.873010 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.873035 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.873054 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:42Z","lastTransitionTime":"2026-01-26T22:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.923470 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.923538 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.923557 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.923588 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.923608 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:42Z","lastTransitionTime":"2026-01-26T22:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: E0126 22:40:42.943657 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:42Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.949377 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.949438 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.949457 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.949482 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.949501 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:42Z","lastTransitionTime":"2026-01-26T22:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: E0126 22:40:42.970363 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:42Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.975590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.975651 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.975671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.975698 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.975719 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:42Z","lastTransitionTime":"2026-01-26T22:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: E0126 22:40:42.998687 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:42Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.004880 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.004963 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.004983 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.005010 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.005031 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:43Z","lastTransitionTime":"2026-01-26T22:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:43 crc kubenswrapper[4793]: E0126 22:40:43.028242 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:43Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.036226 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.036357 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.036390 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.036430 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.036472 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:43Z","lastTransitionTime":"2026-01-26T22:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:43 crc kubenswrapper[4793]: E0126 22:40:43.061811 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:43Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:43 crc kubenswrapper[4793]: E0126 22:40:43.062073 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.064418 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.064506 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.064534 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.064569 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.064595 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:43Z","lastTransitionTime":"2026-01-26T22:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.168610 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.168677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.168697 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.168723 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.168743 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:43Z","lastTransitionTime":"2026-01-26T22:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.272344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.272415 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.272432 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.272464 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.272500 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:43Z","lastTransitionTime":"2026-01-26T22:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.375915 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.375975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.375993 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.376018 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.376038 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:43Z","lastTransitionTime":"2026-01-26T22:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.478956 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.479014 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.479033 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.479062 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.479082 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:43Z","lastTransitionTime":"2026-01-26T22:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.582028 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.582087 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.582104 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.582145 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.582164 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:43Z","lastTransitionTime":"2026-01-26T22:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.685058 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.685120 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.685143 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.685175 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.685231 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:43Z","lastTransitionTime":"2026-01-26T22:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.740151 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 06:30:53.02785541 +0000 UTC
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.760881 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.760942 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:40:43 crc kubenswrapper[4793]: E0126 22:40:43.761051 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.761174 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.761275 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:40:43 crc kubenswrapper[4793]: E0126 22:40:43.761352 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:40:43 crc kubenswrapper[4793]: E0126 22:40:43.761558 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:40:43 crc kubenswrapper[4793]: E0126 22:40:43.761709 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.788370 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.788423 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.788444 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.788471 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.788493 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:43Z","lastTransitionTime":"2026-01-26T22:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.892268 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.892323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.892342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.892366 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.892386 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:43Z","lastTransitionTime":"2026-01-26T22:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.995968 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.996015 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.996026 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.996045 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.996061 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:43Z","lastTransitionTime":"2026-01-26T22:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.099222 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.099280 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.099300 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.099324 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.099344 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:44Z","lastTransitionTime":"2026-01-26T22:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.202433 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.202490 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.202508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.202533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.202551 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:44Z","lastTransitionTime":"2026-01-26T22:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.305254 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.305299 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.305312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.305330 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.305343 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:44Z","lastTransitionTime":"2026-01-26T22:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.408367 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.408436 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.408462 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.408493 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.408518 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:44Z","lastTransitionTime":"2026-01-26T22:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.511160 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.511252 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.511265 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.511283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.511300 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:44Z","lastTransitionTime":"2026-01-26T22:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.613416 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.613496 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.613522 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.613555 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.613575 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:44Z","lastTransitionTime":"2026-01-26T22:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.716637 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.716683 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.716699 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.716719 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.716735 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:44Z","lastTransitionTime":"2026-01-26T22:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.741111 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 19:33:27.985332247 +0000 UTC
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.819901 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.819938 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.819948 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.819965 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.819975 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:44Z","lastTransitionTime":"2026-01-26T22:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.922830 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.922896 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.922919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.922947 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.922969 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:44Z","lastTransitionTime":"2026-01-26T22:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.029496 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.029578 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.029602 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.029634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.029658 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:45Z","lastTransitionTime":"2026-01-26T22:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.132517 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.132567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.132580 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.132600 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.132615 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:45Z","lastTransitionTime":"2026-01-26T22:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.235006 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.235042 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.235054 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.235078 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.235103 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:45Z","lastTransitionTime":"2026-01-26T22:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.337872 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.337911 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.337924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.337942 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.337954 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:45Z","lastTransitionTime":"2026-01-26T22:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.441615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.441673 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.441685 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.441706 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.441720 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:45Z","lastTransitionTime":"2026-01-26T22:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.544803 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.544878 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.544892 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.544915 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.544932 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:45Z","lastTransitionTime":"2026-01-26T22:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.648306 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.648359 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.648372 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.648392 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.648406 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:45Z","lastTransitionTime":"2026-01-26T22:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.742234 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 19:00:22.574457352 +0000 UTC
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.751758 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.751803 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.751821 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.751846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.751904 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:45Z","lastTransitionTime":"2026-01-26T22:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.759913 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.759946 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.759948 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.760003 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:40:45 crc kubenswrapper[4793]: E0126 22:40:45.760134 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:40:45 crc kubenswrapper[4793]: E0126 22:40:45.760265 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:40:45 crc kubenswrapper[4793]: E0126 22:40:45.760390 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:40:45 crc kubenswrapper[4793]: E0126 22:40:45.760475 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.781419 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.794269 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576cd1b-4786-4a18-b570-5f961f464036\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5197efcde6d2061f530a68ea7c0c99ec4446554b84a1811a7d970a43797ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2a1df857107cb1b07fd3524ba4d508bb2694a49e2de3a96c9938ec4bbdecef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03472d948157db27ba9a9cf1410100a91b86b0e07784e05cd870871099ad333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.804582 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31fa
a78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.817828 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.840921 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:39Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 22:40:39.089571 6447 services_controller.go:356] Processing sync for service openshift-console/downloads for network=default\\\\nI0126 22:40:39.089565 
6447 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-7rl9w before timer (time: 2026-01-26 22:40:40.05437048 +0000 UTC m=+1.651757425): skip\\\\nI0126 22:40:39.089584 6447 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 49.751µs)\\\\nI0126 22:40:39.089666 6447 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 22:40:39.089724 6447 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:40:39.089753 6447 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:40:39.089754 6447 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 22:40:39.094644 6447 factory.go:656] Stopping watch factory\\\\nI0126 22:40:39.094689 6447 ovnkube.go:599] Stopped ovnkube\\\\nI0126 22:40:39.094730 6447 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 22:40:39.094846 6447 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd
48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.855952 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.856455 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.856694 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.856918 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.857125 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:45Z","lastTransitionTime":"2026-01-26T22:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.857320 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.874788 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.900836 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.916860 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.934493 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.948833 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.961082 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.961140 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.961157 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.961179 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.961218 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:45Z","lastTransitionTime":"2026-01-26T22:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.963183 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:45 crc 
kubenswrapper[4793]: I0126 22:40:45.982257 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d5
00d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.001154 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.021497 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:46Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.039468 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:46Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.053592 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:46Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.064095 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.064209 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.064234 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.064262 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.064281 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:46Z","lastTransitionTime":"2026-01-26T22:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.166824 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.166884 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.166904 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.166931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.166951 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:46Z","lastTransitionTime":"2026-01-26T22:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.270019 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.270080 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.270099 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.270121 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.270139 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:46Z","lastTransitionTime":"2026-01-26T22:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.374779 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.375160 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.375176 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.375238 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.375256 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:46Z","lastTransitionTime":"2026-01-26T22:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.479494 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.479557 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.479570 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.479591 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.479604 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:46Z","lastTransitionTime":"2026-01-26T22:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.582946 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.582990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.583001 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.583018 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.583032 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:46Z","lastTransitionTime":"2026-01-26T22:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.685918 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.686009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.686041 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.686160 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.686324 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:46Z","lastTransitionTime":"2026-01-26T22:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.743000 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 01:20:58.734382931 +0000 UTC Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.790436 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.790494 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.790507 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.790533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.790549 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:46Z","lastTransitionTime":"2026-01-26T22:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.895502 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.895568 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.895588 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.895616 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.895634 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:46Z","lastTransitionTime":"2026-01-26T22:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.998553 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.998613 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.998625 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.998643 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.998659 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:46Z","lastTransitionTime":"2026-01-26T22:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.101553 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.101630 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.101653 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.101687 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.101710 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:47Z","lastTransitionTime":"2026-01-26T22:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.204298 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.204361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.204385 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.204423 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.204448 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:47Z","lastTransitionTime":"2026-01-26T22:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.307739 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.307795 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.307808 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.307829 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.307847 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:47Z","lastTransitionTime":"2026-01-26T22:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.410978 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.411030 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.411043 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.411066 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.411078 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:47Z","lastTransitionTime":"2026-01-26T22:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.514710 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.514786 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.514808 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.514838 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.514862 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:47Z","lastTransitionTime":"2026-01-26T22:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.619020 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.619078 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.619092 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.619114 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.619128 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:47Z","lastTransitionTime":"2026-01-26T22:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.722887 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.722954 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.722973 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.723004 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.723024 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:47Z","lastTransitionTime":"2026-01-26T22:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.743219 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 18:36:57.106525458 +0000 UTC
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.760961 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:40:47 crc kubenswrapper[4793]: E0126 22:40:47.761134 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.761218 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.761320 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:40:47 crc kubenswrapper[4793]: E0126 22:40:47.761371 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.760971 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:40:47 crc kubenswrapper[4793]: E0126 22:40:47.761538 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:40:47 crc kubenswrapper[4793]: E0126 22:40:47.761661 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.825962 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.826018 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.826043 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.826071 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.826410 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:47Z","lastTransitionTime":"2026-01-26T22:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.930064 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.930229 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.930255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.930290 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.930317 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:47Z","lastTransitionTime":"2026-01-26T22:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.033538 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.033608 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.033633 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.033666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.033692 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:48Z","lastTransitionTime":"2026-01-26T22:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.137741 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.137827 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.137855 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.137891 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.137912 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:48Z","lastTransitionTime":"2026-01-26T22:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.241010 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.241077 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.241099 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.241133 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.241159 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:48Z","lastTransitionTime":"2026-01-26T22:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.345092 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.345158 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.345217 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.345299 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.345325 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:48Z","lastTransitionTime":"2026-01-26T22:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.448759 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.448825 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.448838 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.448863 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.448880 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:48Z","lastTransitionTime":"2026-01-26T22:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.552860 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.552948 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.552970 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.553411 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.553698 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:48Z","lastTransitionTime":"2026-01-26T22:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.657709 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.657784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.657802 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.657831 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.657848 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:48Z","lastTransitionTime":"2026-01-26T22:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.744250 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 18:11:23.706678216 +0000 UTC
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.761177 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.761278 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.761527 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.761560 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.761880 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:48Z","lastTransitionTime":"2026-01-26T22:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.865898 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.865944 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.865958 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.865978 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.865994 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:48Z","lastTransitionTime":"2026-01-26T22:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.968817 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.968903 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.968929 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.968970 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.968996 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:48Z","lastTransitionTime":"2026-01-26T22:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.072116 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.072147 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.072155 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.072170 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.072180 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:49Z","lastTransitionTime":"2026-01-26T22:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.175635 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.175675 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.175685 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.175702 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.175715 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:49Z","lastTransitionTime":"2026-01-26T22:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.279371 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.279418 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.279431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.279453 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.279468 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:49Z","lastTransitionTime":"2026-01-26T22:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.382180 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.382244 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.382256 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.382273 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.382306 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:49Z","lastTransitionTime":"2026-01-26T22:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.485427 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.485501 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.485522 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.485551 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.485576 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:49Z","lastTransitionTime":"2026-01-26T22:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.588651 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.588694 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.588706 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.588727 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.588740 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:49Z","lastTransitionTime":"2026-01-26T22:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.692458 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.692530 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.692550 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.692584 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.692604 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:49Z","lastTransitionTime":"2026-01-26T22:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.744452 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 12:48:16.335227437 +0000 UTC
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.760752 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.760862 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:40:49 crc kubenswrapper[4793]: E0126 22:40:49.760948 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.761052 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:40:49 crc kubenswrapper[4793]: E0126 22:40:49.761299 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.761607 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:40:49 crc kubenswrapper[4793]: E0126 22:40:49.761901 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:40:49 crc kubenswrapper[4793]: E0126 22:40:49.761718 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.795012 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.795063 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.795077 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.795098 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.795114 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:49Z","lastTransitionTime":"2026-01-26T22:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.899661 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.899750 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.899774 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.899805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.899823 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:49Z","lastTransitionTime":"2026-01-26T22:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.004209 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.004262 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.004281 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.004308 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.004326 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:50Z","lastTransitionTime":"2026-01-26T22:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.107805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.107878 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.107902 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.107933 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.107955 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:50Z","lastTransitionTime":"2026-01-26T22:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.211006 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.211078 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.211096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.211153 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.211174 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:50Z","lastTransitionTime":"2026-01-26T22:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.314859 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.314921 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.314938 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.314964 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.314982 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:50Z","lastTransitionTime":"2026-01-26T22:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.419144 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.419257 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.419278 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.419339 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.419366 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:50Z","lastTransitionTime":"2026-01-26T22:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.522985 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.523058 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.523084 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.523124 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.523152 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:50Z","lastTransitionTime":"2026-01-26T22:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.626781 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.627267 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.627427 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.627597 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.627736 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:50Z","lastTransitionTime":"2026-01-26T22:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.731016 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.731088 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.731114 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.731144 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.731163 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:50Z","lastTransitionTime":"2026-01-26T22:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.745282 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 02:03:18.79862601 +0000 UTC Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.834356 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.834392 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.834403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.834420 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.834431 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:50Z","lastTransitionTime":"2026-01-26T22:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.938008 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.938062 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.938074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.938093 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.938105 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:50Z","lastTransitionTime":"2026-01-26T22:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.041987 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.042068 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.042093 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.042127 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.042154 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:51Z","lastTransitionTime":"2026-01-26T22:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.145438 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.145511 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.145532 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.145567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.145594 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:51Z","lastTransitionTime":"2026-01-26T22:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.249009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.249066 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.249078 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.249099 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.249112 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:51Z","lastTransitionTime":"2026-01-26T22:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.351939 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.352007 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.352021 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.352040 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.352053 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:51Z","lastTransitionTime":"2026-01-26T22:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.454498 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.454563 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.454587 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.454617 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.454638 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:51Z","lastTransitionTime":"2026-01-26T22:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.557420 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.557724 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.557787 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.557878 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.557942 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:51Z","lastTransitionTime":"2026-01-26T22:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.660613 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.660669 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.660682 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.660699 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.660710 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:51Z","lastTransitionTime":"2026-01-26T22:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.746079 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 21:50:29.519091645 +0000 UTC Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.760394 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.760418 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.760492 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.760613 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:51 crc kubenswrapper[4793]: E0126 22:40:51.760778 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:51 crc kubenswrapper[4793]: E0126 22:40:51.760930 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:51 crc kubenswrapper[4793]: E0126 22:40:51.761031 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:51 crc kubenswrapper[4793]: E0126 22:40:51.761136 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.762627 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.762710 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.762783 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.762848 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.762911 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:51Z","lastTransitionTime":"2026-01-26T22:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.865741 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.865804 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.865828 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.865853 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.865876 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:51Z","lastTransitionTime":"2026-01-26T22:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.969128 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.969592 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.969795 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.969963 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.970110 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:51Z","lastTransitionTime":"2026-01-26T22:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.073000 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.073058 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.073074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.073096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.073110 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:52Z","lastTransitionTime":"2026-01-26T22:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.175570 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.175647 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.175666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.175697 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.175727 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:52Z","lastTransitionTime":"2026-01-26T22:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.278361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.278392 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.278401 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.278421 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.278435 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:52Z","lastTransitionTime":"2026-01-26T22:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.395641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.395710 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.395723 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.395742 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.395758 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:52Z","lastTransitionTime":"2026-01-26T22:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.499133 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.499236 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.499261 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.499289 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.499310 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:52Z","lastTransitionTime":"2026-01-26T22:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.603689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.603734 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.603804 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.603831 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.603845 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:52Z","lastTransitionTime":"2026-01-26T22:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.707344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.707399 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.707418 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.707446 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.707466 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:52Z","lastTransitionTime":"2026-01-26T22:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.746766 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 11:54:54.1767389 +0000 UTC Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.761649 4793 scope.go:117] "RemoveContainer" containerID="f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d" Jan 26 22:40:52 crc kubenswrapper[4793]: E0126 22:40:52.762102 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.810344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.810388 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.810408 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.810436 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.810457 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:52Z","lastTransitionTime":"2026-01-26T22:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.912716 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.912758 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.912767 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.912781 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.912792 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:52Z","lastTransitionTime":"2026-01-26T22:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.015950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.016018 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.016039 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.016067 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.016087 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.120223 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.120284 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.120307 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.120343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.120363 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.224074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.224114 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.224126 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.224144 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.224156 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.327038 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.327096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.327108 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.327132 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.327147 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.387702 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.387804 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.387824 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.387855 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.387875 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: E0126 22:40:53.402943 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:53Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.407627 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.407666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.407685 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.407707 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.407723 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: E0126 22:40:53.420772 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:53Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.427308 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.427361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.427383 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.427410 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.427430 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: E0126 22:40:53.467743 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:53Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.474109 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.474173 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.474215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.474242 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.474262 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: E0126 22:40:53.488954 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:53Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.494876 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.494959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.494986 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.495011 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.495029 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: E0126 22:40:53.513062 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:53Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:53 crc kubenswrapper[4793]: E0126 22:40:53.513394 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.514975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.515036 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.515052 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.515082 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.515103 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.617979 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.618014 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.618022 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.618036 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.618047 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.720412 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.720476 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.720493 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.720519 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.720537 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.747707 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 11:39:50.884406587 +0000 UTC Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.760544 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.760590 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.760563 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:53 crc kubenswrapper[4793]: E0126 22:40:53.760749 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.760816 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:53 crc kubenswrapper[4793]: E0126 22:40:53.761013 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:53 crc kubenswrapper[4793]: E0126 22:40:53.761102 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:53 crc kubenswrapper[4793]: E0126 22:40:53.761278 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.823666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.823750 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.823778 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.823810 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.823834 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.926639 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.926704 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.926730 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.926794 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.926822 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.030606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.030678 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.030703 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.030738 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.030765 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:54Z","lastTransitionTime":"2026-01-26T22:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.135031 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.135087 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.135110 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.135145 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.135166 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:54Z","lastTransitionTime":"2026-01-26T22:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.238260 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.238316 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.238326 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.238347 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.238360 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:54Z","lastTransitionTime":"2026-01-26T22:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.341338 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.341398 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.341407 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.341421 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.341433 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:54Z","lastTransitionTime":"2026-01-26T22:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.445650 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.445696 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.445710 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.445729 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.445742 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:54Z","lastTransitionTime":"2026-01-26T22:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.549399 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.549470 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.549489 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.549518 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.549538 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:54Z","lastTransitionTime":"2026-01-26T22:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.653215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.653290 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.653313 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.653347 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.653375 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:54Z","lastTransitionTime":"2026-01-26T22:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.747906 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 09:44:06.049432709 +0000 UTC Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.756601 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.756671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.756691 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.756721 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.756741 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:54Z","lastTransitionTime":"2026-01-26T22:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.860076 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.860137 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.860154 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.860181 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.860226 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:54Z","lastTransitionTime":"2026-01-26T22:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.963606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.963664 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.963681 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.963710 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.963728 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:54Z","lastTransitionTime":"2026-01-26T22:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.067182 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.067280 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.067302 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.067337 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.067358 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:55Z","lastTransitionTime":"2026-01-26T22:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.170711 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.170786 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.170804 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.170832 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.170850 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:55Z","lastTransitionTime":"2026-01-26T22:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.273668 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.273739 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.273766 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.273800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.273822 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:55Z","lastTransitionTime":"2026-01-26T22:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.376841 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.376874 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.376890 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.376907 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.376918 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:55Z","lastTransitionTime":"2026-01-26T22:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.479609 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.479692 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.479714 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.479747 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.479768 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:55Z","lastTransitionTime":"2026-01-26T22:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.582807 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.582857 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.582870 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.582893 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.582908 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:55Z","lastTransitionTime":"2026-01-26T22:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.685739 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.685772 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.685782 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.685801 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.685812 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:55Z","lastTransitionTime":"2026-01-26T22:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.749122 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 18:13:15.191226872 +0000 UTC Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.760637 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.760726 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.760823 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:55 crc kubenswrapper[4793]: E0126 22:40:55.760962 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.761131 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:55 crc kubenswrapper[4793]: E0126 22:40:55.761407 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:55 crc kubenswrapper[4793]: E0126 22:40:55.761646 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:55 crc kubenswrapper[4793]: E0126 22:40:55.761701 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.774645 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.784438 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.788570 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.788615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.788632 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.788652 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.788666 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:55Z","lastTransitionTime":"2026-01-26T22:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.796328 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a8
89fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.807720 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576cd1b-4786-4a18-b570-5f961f464036\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5197efcde6d2061f530a68ea7c0c99ec4446554b84a1811a7d970a43797ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2a1df857107cb1b07fd3524ba4d508bb2694a49e2de3a96c9938ec4bbdecef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03472d948157db27ba9a9cf1410100a91b86b0e07784e05cd870871099ad333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.819344 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31fa
a78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.836137 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.848145 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.859205 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.868723 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.885766 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.891827 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.891875 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.891889 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.891910 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.891928 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:55Z","lastTransitionTime":"2026-01-26T22:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.907167 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:39Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 22:40:39.089571 6447 services_controller.go:356] Processing sync for service openshift-console/downloads for network=default\\\\nI0126 22:40:39.089565 
6447 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-7rl9w before timer (time: 2026-01-26 22:40:40.05437048 +0000 UTC m=+1.651757425): skip\\\\nI0126 22:40:39.089584 6447 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 49.751µs)\\\\nI0126 22:40:39.089666 6447 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 22:40:39.089724 6447 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:40:39.089753 6447 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:40:39.089754 6447 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 22:40:39.094644 6447 factory.go:656] Stopping watch factory\\\\nI0126 22:40:39.094689 6447 ovnkube.go:599] Stopped ovnkube\\\\nI0126 22:40:39.094730 6447 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 22:40:39.094846 6447 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd
48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.922290 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4
183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.935098 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.945389 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.959486 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.973292 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4
517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.989505 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc 
kubenswrapper[4793]: I0126 22:40:55.995315 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.995381 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.995397 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.995424 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.995441 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:55Z","lastTransitionTime":"2026-01-26T22:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.097731 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.097819 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.097843 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.097874 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.097893 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:56Z","lastTransitionTime":"2026-01-26T22:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.199707 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.199818 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.199840 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.199871 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.199914 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:56Z","lastTransitionTime":"2026-01-26T22:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.303536 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.303590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.303600 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.303620 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.303640 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:56Z","lastTransitionTime":"2026-01-26T22:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.406444 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.406503 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.406529 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.406562 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.406590 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:56Z","lastTransitionTime":"2026-01-26T22:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.509386 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.509443 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.509456 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.509477 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.509495 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:56Z","lastTransitionTime":"2026-01-26T22:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.612281 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.612333 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.612345 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.612364 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.612378 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:56Z","lastTransitionTime":"2026-01-26T22:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.715764 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.715840 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.715860 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.715886 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.715903 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:56Z","lastTransitionTime":"2026-01-26T22:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.749347 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 05:28:43.755005326 +0000 UTC Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.818538 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.818570 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.818581 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.818597 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.818609 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:56Z","lastTransitionTime":"2026-01-26T22:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.921553 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.921593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.921605 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.921627 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.921640 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:56Z","lastTransitionTime":"2026-01-26T22:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.024044 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.024093 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.024106 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.024132 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.024143 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:57Z","lastTransitionTime":"2026-01-26T22:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.126648 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.126730 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.126751 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.126784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.126816 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:57Z","lastTransitionTime":"2026-01-26T22:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.230041 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.230102 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.230126 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.230150 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.230164 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:57Z","lastTransitionTime":"2026-01-26T22:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.333727 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.333864 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.333879 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.333904 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.333920 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:57Z","lastTransitionTime":"2026-01-26T22:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.436117 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.436255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.436286 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.436317 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.436339 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:57Z","lastTransitionTime":"2026-01-26T22:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.538723 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.538818 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.538838 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.538867 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.538886 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:57Z","lastTransitionTime":"2026-01-26T22:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.642455 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.642502 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.642513 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.642530 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.642541 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:57Z","lastTransitionTime":"2026-01-26T22:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.745405 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.745460 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.745479 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.745505 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.745524 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:57Z","lastTransitionTime":"2026-01-26T22:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.749661 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 18:42:02.901341193 +0000 UTC Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.761420 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:57 crc kubenswrapper[4793]: E0126 22:40:57.761561 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.761775 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:57 crc kubenswrapper[4793]: E0126 22:40:57.761858 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.762038 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:57 crc kubenswrapper[4793]: E0126 22:40:57.762127 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.762486 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:57 crc kubenswrapper[4793]: E0126 22:40:57.762558 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.848509 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.848562 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.848573 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.848592 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.848607 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:57Z","lastTransitionTime":"2026-01-26T22:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.950966 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.951031 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.951049 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.951077 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.951099 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:57Z","lastTransitionTime":"2026-01-26T22:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.054048 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.054095 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.054108 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.054128 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.054145 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:58Z","lastTransitionTime":"2026-01-26T22:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.156742 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.156816 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.156838 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.156862 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.156879 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:58Z","lastTransitionTime":"2026-01-26T22:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.259344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.259415 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.259433 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.259459 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.259477 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:58Z","lastTransitionTime":"2026-01-26T22:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.361589 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.361661 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.361689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.361720 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.361743 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:58Z","lastTransitionTime":"2026-01-26T22:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.463809 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.463862 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.463873 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.463897 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.463912 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:58Z","lastTransitionTime":"2026-01-26T22:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.566227 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.566267 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.566281 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.566298 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.566311 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:58Z","lastTransitionTime":"2026-01-26T22:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.658541 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:58 crc kubenswrapper[4793]: E0126 22:40:58.658778 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:58 crc kubenswrapper[4793]: E0126 22:40:58.658879 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs podName:2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc nodeName:}" failed. No retries permitted until 2026-01-26 22:41:30.658855111 +0000 UTC m=+105.647626623 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs") pod "network-metrics-daemon-7rl9w" (UID: "2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.668589 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.668642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.668653 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.668672 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.668687 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:58Z","lastTransitionTime":"2026-01-26T22:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.750281 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 19:35:44.40122025 +0000 UTC Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.771744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.771803 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.771816 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.771840 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.771852 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:58Z","lastTransitionTime":"2026-01-26T22:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.874527 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.874584 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.874596 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.874613 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.874626 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:58Z","lastTransitionTime":"2026-01-26T22:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.977649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.977720 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.977744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.977777 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.977801 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:58Z","lastTransitionTime":"2026-01-26T22:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.080383 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.080429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.080441 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.080457 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.080470 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:59Z","lastTransitionTime":"2026-01-26T22:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.183145 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.183231 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.183249 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.183274 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.183292 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:59Z","lastTransitionTime":"2026-01-26T22:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.285537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.285609 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.285628 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.285654 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.285672 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:59Z","lastTransitionTime":"2026-01-26T22:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.388602 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.388676 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.388698 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.388727 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.388752 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:59Z","lastTransitionTime":"2026-01-26T22:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.490917 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.490950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.490958 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.490974 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.490983 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:59Z","lastTransitionTime":"2026-01-26T22:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.593501 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.593561 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.593580 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.593639 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.593661 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:59Z","lastTransitionTime":"2026-01-26T22:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.697016 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.697065 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.697079 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.697101 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.697116 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:59Z","lastTransitionTime":"2026-01-26T22:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.751011 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 20:42:05.606659441 +0000 UTC Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.760777 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.760828 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:59 crc kubenswrapper[4793]: E0126 22:40:59.760969 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.760981 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:59 crc kubenswrapper[4793]: E0126 22:40:59.761162 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:59 crc kubenswrapper[4793]: E0126 22:40:59.761388 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.761563 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:59 crc kubenswrapper[4793]: E0126 22:40:59.761663 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.800828 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.801334 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.801818 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.802412 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.802741 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:59Z","lastTransitionTime":"2026-01-26T22:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.906686 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.907007 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.907172 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.907345 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.907485 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:59Z","lastTransitionTime":"2026-01-26T22:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.010517 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.010586 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.010607 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.010636 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.010657 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:00Z","lastTransitionTime":"2026-01-26T22:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.114369 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.114432 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.114453 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.114482 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.114503 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:00Z","lastTransitionTime":"2026-01-26T22:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.217499 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.217561 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.217576 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.217595 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.217610 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:00Z","lastTransitionTime":"2026-01-26T22:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.320671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.320740 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.320758 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.320785 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.320803 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:00Z","lastTransitionTime":"2026-01-26T22:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.424278 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.424367 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.424390 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.424421 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.424442 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:00Z","lastTransitionTime":"2026-01-26T22:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.527112 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.527175 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.527217 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.527248 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.527267 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:00Z","lastTransitionTime":"2026-01-26T22:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.629844 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.629911 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.629924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.629951 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.629965 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:00Z","lastTransitionTime":"2026-01-26T22:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.733328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.734250 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.734561 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.734723 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.734858 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:00Z","lastTransitionTime":"2026-01-26T22:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.751950 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 17:01:56.593402115 +0000 UTC Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.837430 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.837475 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.837487 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.837508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.837522 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:00Z","lastTransitionTime":"2026-01-26T22:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.941050 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.941120 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.941144 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.941173 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.941228 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:00Z","lastTransitionTime":"2026-01-26T22:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.044450 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.044800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.044927 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.045109 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.045364 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:01Z","lastTransitionTime":"2026-01-26T22:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.147895 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.148328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.148546 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.148746 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.149358 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:01Z","lastTransitionTime":"2026-01-26T22:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.252424 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.252473 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.252489 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.252512 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.252533 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:01Z","lastTransitionTime":"2026-01-26T22:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.357052 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.357416 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.357581 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.357790 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.357956 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:01Z","lastTransitionTime":"2026-01-26T22:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.461290 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.461349 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.461366 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.461393 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.461409 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:01Z","lastTransitionTime":"2026-01-26T22:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.563884 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.564020 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.564068 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.564140 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.564170 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:01Z","lastTransitionTime":"2026-01-26T22:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.667571 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.667636 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.667659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.667690 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.667714 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:01Z","lastTransitionTime":"2026-01-26T22:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.753304 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 22:56:26.72991922 +0000 UTC Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.760830 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.760860 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:01 crc kubenswrapper[4793]: E0126 22:41:01.761011 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.761083 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.761137 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:01 crc kubenswrapper[4793]: E0126 22:41:01.762499 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:01 crc kubenswrapper[4793]: E0126 22:41:01.762338 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:01 crc kubenswrapper[4793]: E0126 22:41:01.762647 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.769535 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.769591 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.769635 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.769664 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.769686 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:01Z","lastTransitionTime":"2026-01-26T22:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.778385 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.872980 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.873045 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.873064 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.873088 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.873107 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:01Z","lastTransitionTime":"2026-01-26T22:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.975972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.976041 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.976059 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.976090 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.976114 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:01Z","lastTransitionTime":"2026-01-26T22:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.080373 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.080457 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.080481 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.080511 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.080530 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:02Z","lastTransitionTime":"2026-01-26T22:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.184523 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.184608 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.184630 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.184660 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.184683 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:02Z","lastTransitionTime":"2026-01-26T22:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.287685 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.287754 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.287773 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.287798 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.287819 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:02Z","lastTransitionTime":"2026-01-26T22:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.391623 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.391690 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.391709 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.391734 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.391752 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:02Z","lastTransitionTime":"2026-01-26T22:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.495387 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.495496 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.495522 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.495552 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.495574 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:02Z","lastTransitionTime":"2026-01-26T22:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.600881 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.600953 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.600972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.601001 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.601025 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:02Z","lastTransitionTime":"2026-01-26T22:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.704995 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.705065 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.705090 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.705124 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.705150 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:02Z","lastTransitionTime":"2026-01-26T22:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.754010 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 03:06:52.398526007 +0000 UTC Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.808107 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.808169 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.808186 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.808235 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.808254 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:02Z","lastTransitionTime":"2026-01-26T22:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.911819 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.911890 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.911912 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.911938 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.911956 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:02Z","lastTransitionTime":"2026-01-26T22:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.014454 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.014517 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.014537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.014562 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.014580 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.117531 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.117606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.117629 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.117666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.117699 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.221128 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.221209 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.221228 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.221251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.221269 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.261302 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l5qgq_2e6daa0d-7641-46e1-b9ab-8479c1cd00d6/kube-multus/0.log" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.261385 4793 generic.go:334] "Generic (PLEG): container finished" podID="2e6daa0d-7641-46e1-b9ab-8479c1cd00d6" containerID="a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08" exitCode=1 Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.261455 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l5qgq" event={"ID":"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6","Type":"ContainerDied","Data":"a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08"} Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.262321 4793 scope.go:117] "RemoveContainer" containerID="a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.287553 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.310343 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.324081 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.324150 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.324170 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.324223 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.324244 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.332529 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.349724 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.384054 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:39Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 22:40:39.089571 6447 services_controller.go:356] Processing sync for service openshift-console/downloads for network=default\\\\nI0126 22:40:39.089565 
6447 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-7rl9w before timer (time: 2026-01-26 22:40:40.05437048 +0000 UTC m=+1.651757425): skip\\\\nI0126 22:40:39.089584 6447 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 49.751µs)\\\\nI0126 22:40:39.089666 6447 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 22:40:39.089724 6447 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:40:39.089753 6447 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:40:39.089754 6447 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 22:40:39.094644 6447 factory.go:656] Stopping watch factory\\\\nI0126 22:40:39.094689 6447 ovnkube.go:599] Stopped ovnkube\\\\nI0126 22:40:39.094730 6447 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 22:40:39.094846 6447 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd
48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.409062 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4
183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.426435 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.426481 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.426495 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.426515 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.426532 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.432416 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.451144 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7617a467-2e7f-408e-b4d8-70624f991d83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c3dd342b8a6dcc8374e6310fbe8b8ac499ea042962ef9442f2a4a143650fbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef3
18bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.469174 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.491136 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.510705 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4
517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.529678 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.529743 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.529766 4793 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.529799 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.529823 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.530640 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 
26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.551506 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.570383 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.592296 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.610077 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576cd1b-4786-4a18-b570-5f961f464036\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5197efcde6d2061f530a68ea7c0c99ec4446554b84a1811a7d970a43797ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2a1df857107cb1b07fd3524ba4d508bb2694a49e2de3a96c9938ec4bbdecef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03472d948157db27ba9a9cf1410100a91b86b0e07784e05cd870871099ad333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.623099 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31fa
a78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.634643 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.634732 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.634759 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:03 crc 
kubenswrapper[4793]: I0126 22:41:03.634794 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.634819 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.646824 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:41:02Z\\\",\\\"message\\\":\\\"2026-01-26T22:40:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae640c77-31a8-4d19-b4fe-df5a37430897\\\\n2026-01-26T22:40:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae640c77-31a8-4d19-b4fe-df5a37430897 to /host/opt/cni/bin/\\\\n2026-01-26T22:40:17Z [verbose] multus-daemon started\\\\n2026-01-26T22:40:17Z [verbose] Readiness Indicator file check\\\\n2026-01-26T22:41:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.737864 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.737907 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.737916 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.737933 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.737942 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.754212 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:03:47.342537022 +0000 UTC Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.760624 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.760692 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:03 crc kubenswrapper[4793]: E0126 22:41:03.760771 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.760792 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:03 crc kubenswrapper[4793]: E0126 22:41:03.760891 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:03 crc kubenswrapper[4793]: E0126 22:41:03.760994 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.761735 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:03 crc kubenswrapper[4793]: E0126 22:41:03.762268 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.840649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.840715 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.840735 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.840762 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.840782 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.894120 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.894180 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.894229 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.894257 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.894282 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:03 crc kubenswrapper[4793]: E0126 22:41:03.916084 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.921667 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.921736 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.921763 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.921788 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.921806 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:03 crc kubenswrapper[4793]: E0126 22:41:03.941115 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.947544 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.947617 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.947642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.947675 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.947773 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.999463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.999527 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.999547 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.999581 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.999607 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:04 crc kubenswrapper[4793]: E0126 22:41:04.022573 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: E0126 22:41:04.022808 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.025521 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.025592 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.025617 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.025648 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.025674 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:04Z","lastTransitionTime":"2026-01-26T22:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.129226 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.129285 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.129305 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.129331 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.129349 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:04Z","lastTransitionTime":"2026-01-26T22:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.232394 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.232492 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.232518 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.232553 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.232577 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:04Z","lastTransitionTime":"2026-01-26T22:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.267469 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l5qgq_2e6daa0d-7641-46e1-b9ab-8479c1cd00d6/kube-multus/0.log" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.267530 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l5qgq" event={"ID":"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6","Type":"ContainerStarted","Data":"2cb61fdad3703c9db3f70a80af86571cbed8b1dc20e073f4ee149431f71f0298"} Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.283363 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8
b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e
5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.293955 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7617a467-2e7f-408e-b4d8-70624f991d83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c3dd342b8a6dcc8374e6310fbe8b8ac499ea042962ef9442f2a4a143650fbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.306570 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.319467 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.331162 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4
517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.335040 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.335077 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.335092 4793 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.335113 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.335126 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:04Z","lastTransitionTime":"2026-01-26T22:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.341613 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 
26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.353023 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.362564 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.374483 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.385710 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576cd1b-4786-4a18-b570-5f961f464036\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5197efcde6d2061f530a68ea7c0c99ec4446554b84a1811a7d970a43797ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2a1df857107cb1b07fd3524ba4d508bb2694a49e2de3a96c9938ec4bbdecef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03472d948157db27ba9a9cf1410100a91b86b0e07784e05cd870871099ad333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.394846 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31fa
a78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.407822 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb61fdad3703c9db3f70a80af86571cbed8b1dc20e073f4ee149431f71f0298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:41:02Z\\\",\\\"message\\\":\\\"2026-01-26T22:40:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae640c77-31a8-4d19-b4fe-df5a37430897\\\\n2026-01-26T22:40:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae640c77-31a8-4d19-b4fe-df5a37430897 to /host/opt/cni/bin/\\\\n2026-01-26T22:40:17Z [verbose] multus-daemon started\\\\n2026-01-26T22:40:17Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T22:41:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.424237 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.437342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.437373 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.437382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 
22:41:04.437397 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.437407 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:04Z","lastTransitionTime":"2026-01-26T22:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.441270 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.456138 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.469990 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.492878 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:39Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 22:40:39.089571 6447 services_controller.go:356] Processing sync for service openshift-console/downloads for network=default\\\\nI0126 22:40:39.089565 
6447 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-7rl9w before timer (time: 2026-01-26 22:40:40.05437048 +0000 UTC m=+1.651757425): skip\\\\nI0126 22:40:39.089584 6447 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 49.751µs)\\\\nI0126 22:40:39.089666 6447 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 22:40:39.089724 6447 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:40:39.089753 6447 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:40:39.089754 6447 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 22:40:39.094644 6447 factory.go:656] Stopping watch factory\\\\nI0126 22:40:39.094689 6447 ovnkube.go:599] Stopped ovnkube\\\\nI0126 22:40:39.094730 6447 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 22:40:39.094846 6447 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd
48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.514508 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4
183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.541277 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.541319 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.541332 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.541352 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.541367 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:04Z","lastTransitionTime":"2026-01-26T22:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.644563 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.644656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.644676 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.644731 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.644751 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:04Z","lastTransitionTime":"2026-01-26T22:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.747035 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.747084 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.747100 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.747126 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.747143 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:04Z","lastTransitionTime":"2026-01-26T22:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.754418 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 18:32:44.245373249 +0000 UTC Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.762046 4793 scope.go:117] "RemoveContainer" containerID="f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.850586 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.850641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.850658 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.850685 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.850702 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:04Z","lastTransitionTime":"2026-01-26T22:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.953700 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.953754 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.953772 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.953799 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.953823 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:04Z","lastTransitionTime":"2026-01-26T22:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.056787 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.056821 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.056830 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.056846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.056858 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:05Z","lastTransitionTime":"2026-01-26T22:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.159494 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.159548 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.159561 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.159583 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.159602 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:05Z","lastTransitionTime":"2026-01-26T22:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.262730 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.262783 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.262803 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.262828 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.262849 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:05Z","lastTransitionTime":"2026-01-26T22:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.272735 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/2.log" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.275368 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerStarted","Data":"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23"} Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.275783 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.288911 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.300409 4793 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576cd1b-4786-4a18-b570-5f961f464036\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5197efcde6d2061f530a68ea7c0c99ec4446554b84a1811a7d970a43797ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2a1df857107cb1b07fd3524ba4d508bb2694a49e2de3a96c9938ec4bbdecef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6
b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03472d948157db27ba9a9cf1410100a91b86b0e07784e05cd870871099ad333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-
host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.309612 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.323819 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb61fdad3703c9db3f70a80af86571cbed8b1dc20e073f4ee149431f71f0298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:41:02Z\\\",\\\"message\\\":\\\"2026-01-26T22:40:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae640c77-31a8-4d19-b4fe-df5a37430897\\\\n2026-01-26T22:40:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae640c77-31a8-4d19-b4fe-df5a37430897 to /host/opt/cni/bin/\\\\n2026-01-26T22:40:17Z [verbose] multus-daemon started\\\\n2026-01-26T22:40:17Z [verbose] Readiness Indicator file check\\\\n2026-01-26T22:41:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.340615 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.355549 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.366381 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.366452 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.366469 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.366493 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.366512 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:05Z","lastTransitionTime":"2026-01-26T22:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.370454 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.386223 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.408161 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:39Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 22:40:39.089571 6447 services_controller.go:356] Processing sync for service openshift-console/downloads for network=default\\\\nI0126 22:40:39.089565 
6447 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-7rl9w before timer (time: 2026-01-26 22:40:40.05437048 +0000 UTC m=+1.651757425): skip\\\\nI0126 22:40:39.089584 6447 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 49.751µs)\\\\nI0126 22:40:39.089666 6447 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 22:40:39.089724 6447 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:40:39.089753 6447 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:40:39.089754 6447 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 22:40:39.094644 6447 factory.go:656] Stopping watch factory\\\\nI0126 22:40:39.094689 6447 ovnkube.go:599] Stopped ovnkube\\\\nI0126 22:40:39.094730 6447 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 22:40:39.094846 6447 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.424052 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4
183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.437365 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.449373 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7617a467-2e7f-408e-b4d8-70624f991d83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c3dd342b8a6dcc8374e6310fbe8b8ac499ea042962ef9442f2a4a143650fbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef3
18bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.463243 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.469994 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.470038 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.470054 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.470081 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.470116 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:05Z","lastTransitionTime":"2026-01-26T22:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.477934 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.490801 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4
517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.502909 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc 
kubenswrapper[4793]: I0126 22:41:05.515820 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.526635 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.573238 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.573317 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.573331 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.573351 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.573364 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:05Z","lastTransitionTime":"2026-01-26T22:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.676661 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.676705 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.676714 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.676732 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.676743 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:05Z","lastTransitionTime":"2026-01-26T22:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.755528 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 01:35:39.101093127 +0000 UTC Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.760838 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.760944 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.761200 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.761261 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:05 crc kubenswrapper[4793]: E0126 22:41:05.761331 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:05 crc kubenswrapper[4793]: E0126 22:41:05.761453 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:05 crc kubenswrapper[4793]: E0126 22:41:05.761230 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:05 crc kubenswrapper[4793]: E0126 22:41:05.761591 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.775113 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.783610 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.783669 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.783681 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.785323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.785347 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:05Z","lastTransitionTime":"2026-01-26T22:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.792631 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.805438 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.825469 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:39Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 22:40:39.089571 6447 services_controller.go:356] Processing sync for service openshift-console/downloads for network=default\\\\nI0126 22:40:39.089565 
6447 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-7rl9w before timer (time: 2026-01-26 22:40:40.05437048 +0000 UTC m=+1.651757425): skip\\\\nI0126 22:40:39.089584 6447 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 49.751µs)\\\\nI0126 22:40:39.089666 6447 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 22:40:39.089724 6447 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:40:39.089753 6447 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:40:39.089754 6447 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 22:40:39.094644 6447 factory.go:656] Stopping watch factory\\\\nI0126 22:40:39.094689 6447 ovnkube.go:599] Stopped ovnkube\\\\nI0126 22:40:39.094730 6447 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 22:40:39.094846 6447 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.845532 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4
183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.860341 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.872001 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7617a467-2e7f-408e-b4d8-70624f991d83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c3dd342b8a6dcc8374e6310fbe8b8ac499ea042962ef9442f2a4a143650fbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.887204 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.887243 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.887252 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.887269 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.887280 4793 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:05Z","lastTransitionTime":"2026-01-26T22:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.887300 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.900537 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.916092 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4
517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.935626 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc 
kubenswrapper[4793]: I0126 22:41:05.957684 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d5
00d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.972777 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.984885 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.989943 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.989985 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.990002 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.990022 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.990036 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:05Z","lastTransitionTime":"2026-01-26T22:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.999814 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576cd1b-4786-4a18-b570-5f961f464036\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5197efcde6d2061f530a68ea7c0c99ec4446554b84a1811a7d970a43797ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2a1df857107cb1b07fd3524ba4d50
8bb2694a49e2de3a96c9938ec4bbdecef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03472d948157db27ba9a9cf1410100a91b86b0e07784e05cd870871099ad333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.011145 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.027673 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb61fdad3703c9db3f70a80af86571cbed8b1dc20e073f4ee149431f71f0298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:41:02Z\\\",\\\"message\\\":\\\"2026-01-26T22:40:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae640c77-31a8-4d19-b4fe-df5a37430897\\\\n2026-01-26T22:40:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae640c77-31a8-4d19-b4fe-df5a37430897 to /host/opt/cni/bin/\\\\n2026-01-26T22:40:17Z [verbose] multus-daemon started\\\\n2026-01-26T22:40:17Z [verbose] Readiness Indicator file check\\\\n2026-01-26T22:41:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.044365 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.093010 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.093065 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.093082 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.093103 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.093117 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:06Z","lastTransitionTime":"2026-01-26T22:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.195801 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.195860 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.195873 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.195893 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.195907 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:06Z","lastTransitionTime":"2026-01-26T22:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.281147 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/3.log" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.282171 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/2.log" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.285515 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23" exitCode=1 Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.285571 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23"} Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.285628 4793 scope.go:117] "RemoveContainer" containerID="f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.286513 4793 scope.go:117] "RemoveContainer" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23" Jan 26 22:41:06 crc kubenswrapper[4793]: E0126 22:41:06.286767 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.298530 4793 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.298580 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.298597 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.298619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.298635 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:06Z","lastTransitionTime":"2026-01-26T22:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.304598 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.329080 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.344605 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.365408 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.391323 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:39Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 22:40:39.089571 6447 services_controller.go:356] Processing sync for service openshift-console/downloads for network=default\\\\nI0126 22:40:39.089565 
6447 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-7rl9w before timer (time: 2026-01-26 22:40:40.05437048 +0000 UTC m=+1.651757425): skip\\\\nI0126 22:40:39.089584 6447 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 49.751µs)\\\\nI0126 22:40:39.089666 6447 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 22:40:39.089724 6447 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:40:39.089753 6447 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:40:39.089754 6447 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 22:40:39.094644 6447 factory.go:656] Stopping watch factory\\\\nI0126 22:40:39.094689 6447 ovnkube.go:599] Stopped ovnkube\\\\nI0126 22:40:39.094730 6447 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 22:40:39.094846 6447 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:41:05Z\\\",\\\"message\\\":\\\" 6842 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 22:41:05.732808 6842 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:41:05.732874 6842 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 22:41:05.732940 6842 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 22:41:05.733009 6842 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 22:41:05.733095 6842 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI0126 22:41:05.733208 6842 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 22:41:05.733305 6842 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 22:41:05.733430 6842 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 22:41:05.733567 6842 factory.go:656] Stopping watch factory\\\\nI0126 22:41:05.733642 6842 ovnkube.go:599] Stopped ovnkube\\\\nI0126 22:41:05.733026 6842 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:41:05.733162 6842 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 22:41:05.733274 6842 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 22:41:05.733521 6842 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 22:41:05.733937 6842 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{
\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.401239 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.401310 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.401335 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.401367 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.401391 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:06Z","lastTransitionTime":"2026-01-26T22:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.414473 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.434533 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e
5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.450585 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7617a467-2e7f-408e-b4d8-70624f991d83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c3dd342b8a6dcc8374e6310fbe8b8ac499ea042962ef9442f2a4a143650fbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.470246 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.486260 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.501867 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4
517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.505184 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.505272 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.505293 4793 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.505334 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.505362 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:06Z","lastTransitionTime":"2026-01-26T22:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.518831 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 
26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.535338 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.550839 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.570237 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.589825 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576cd1b-4786-4a18-b570-5f961f464036\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5197efcde6d2061f530a68ea7c0c99ec4446554b84a1811a7d970a43797ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2a1df857107cb1b07fd3524ba4d508bb2694a49e2de3a96c9938ec4bbdecef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03472d948157db27ba9a9cf1410100a91b86b0e07784e05cd870871099ad333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.614599 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31fa
a78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.623852 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.623913 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.623934 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:06 crc 
kubenswrapper[4793]: I0126 22:41:06.623964 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.623986 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:06Z","lastTransitionTime":"2026-01-26T22:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.642223 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb61fdad3703c9db3f70a80af86571cbed8b1dc20e073f4ee149431f71f0298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:41:02Z\\\",\\\"message\\\":\\\"2026-01-26T22:40:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae640c77-31a8-4d19-b4fe-df5a37430897\\\\n2026-01-26T22:40:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae640c77-31a8-4d19-b4fe-df5a37430897 to /host/opt/cni/bin/\\\\n2026-01-26T22:40:17Z [verbose] multus-daemon started\\\\n2026-01-26T22:40:17Z [verbose] Readiness Indicator file check\\\\n2026-01-26T22:41:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.728222 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.728302 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.728322 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.728349 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.728369 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:06Z","lastTransitionTime":"2026-01-26T22:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.755999 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 13:32:19.230229567 +0000 UTC Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.832182 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.832297 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.832317 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.832346 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.832371 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:06Z","lastTransitionTime":"2026-01-26T22:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.936386 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.936436 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.936453 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.936477 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.936496 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:06Z","lastTransitionTime":"2026-01-26T22:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.039985 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.040062 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.040079 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.040108 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.040148 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:07Z","lastTransitionTime":"2026-01-26T22:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.142689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.143248 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.143268 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.143295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.143316 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:07Z","lastTransitionTime":"2026-01-26T22:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.246156 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.246298 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.246322 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.246347 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.246364 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:07Z","lastTransitionTime":"2026-01-26T22:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.291617 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/3.log" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.296752 4793 scope.go:117] "RemoveContainer" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23" Jan 26 22:41:07 crc kubenswrapper[4793]: E0126 22:41:07.296968 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.314755 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7617a467-2e7f-408e-b4d8-70624f991d83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c3dd342b8a6dcc8374e6310fbe8b8ac499ea042962ef9442f2a4a143650fbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.334320 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.350217 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 
22:41:07.350313 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.350333 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.350358 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.350377 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:07Z","lastTransitionTime":"2026-01-26T22:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.359914 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.379423 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4
517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.396999 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc 
kubenswrapper[4793]: I0126 22:41:07.417062 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d5
00d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.431884 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.444496 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.459077 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576cd1b-4786-4a18-b570-5f961f464036\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5197efcde6d2061f530a68ea7c0c99ec4446554b84a1811a7d970a43797ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2a1df857107cb1b07fd3524ba4d508bb2694a49e2de3a96c9938ec4bbdecef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03472d948157db27ba9a9cf1410100a91b86b0e07784e05cd870871099ad333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.472470 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.472667 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.472711 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.472730 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.472758 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.472780 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:07Z","lastTransitionTime":"2026-01-26T22:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.491417 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb61fdad3703c9db3f70a80af86571cbed8b1dc20e073f4ee149431f71f0298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:41:02Z\\\",\\\"message\\\":\\\"2026-01-26T22:40:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae640c77-31a8-4d19-b4fe-df5a37430897\\\\n2026-01-26T22:40:17+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae640c77-31a8-4d19-b4fe-df5a37430897 to /host/opt/cni/bin/\\\\n2026-01-26T22:40:17Z [verbose] multus-daemon started\\\\n2026-01-26T22:40:17Z [verbose] Readiness Indicator file check\\\\n2026-01-26T22:41:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.505663 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.520366 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.534235 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.545284 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.564233 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:41:05Z\\\",\\\"message\\\":\\\" 6842 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 22:41:05.732808 6842 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:41:05.732874 6842 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 22:41:05.732940 6842 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 
22:41:05.733009 6842 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 22:41:05.733095 6842 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 22:41:05.733208 6842 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 22:41:05.733305 6842 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 22:41:05.733430 6842 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 22:41:05.733567 6842 factory.go:656] Stopping watch factory\\\\nI0126 22:41:05.733642 6842 ovnkube.go:599] Stopped ovnkube\\\\nI0126 22:41:05.733026 6842 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:41:05.733162 6842 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 22:41:05.733274 6842 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 22:41:05.733521 6842 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 22:41:05.733937 6842 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:41:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd
48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.576638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.576671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.576683 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.576700 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.576711 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:07Z","lastTransitionTime":"2026-01-26T22:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.581579 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.599313 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.679691 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.679757 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.679771 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 
22:41:07.679789 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.679801 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:07Z","lastTransitionTime":"2026-01-26T22:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.756935 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 09:51:25.382563946 +0000 UTC
Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.760314 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.760315 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.760384 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.760434 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:41:07 crc kubenswrapper[4793]: E0126 22:41:07.760520 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:41:07 crc kubenswrapper[4793]: E0126 22:41:07.760646 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:41:07 crc kubenswrapper[4793]: E0126 22:41:07.760717 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:41:07 crc kubenswrapper[4793]: E0126 22:41:07.760776 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.787026 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.787112 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.787135 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.787164 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.787230 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:07Z","lastTransitionTime":"2026-01-26T22:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.890571 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.890688 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.890713 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.890744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.890769 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:07Z","lastTransitionTime":"2026-01-26T22:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.993325 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.993409 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.993435 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.993466 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.993491 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:07Z","lastTransitionTime":"2026-01-26T22:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.095997 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.096058 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.096078 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.096103 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.096121 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:08Z","lastTransitionTime":"2026-01-26T22:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.199242 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.199317 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.199340 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.199365 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.199385 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:08Z","lastTransitionTime":"2026-01-26T22:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.301800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.301856 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.301872 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.301898 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.301922 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:08Z","lastTransitionTime":"2026-01-26T22:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.404050 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.404075 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.404084 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.404096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.404105 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:08Z","lastTransitionTime":"2026-01-26T22:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.506297 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.506329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.506338 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.506351 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.506360 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:08Z","lastTransitionTime":"2026-01-26T22:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.609351 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.609393 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.609410 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.609433 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.609450 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:08Z","lastTransitionTime":"2026-01-26T22:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.712311 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.712342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.712350 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.712365 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.712373 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:08Z","lastTransitionTime":"2026-01-26T22:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.758025 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 04:20:16.09684163 +0000 UTC
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.815296 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.815370 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.815390 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.815416 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.815442 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:08Z","lastTransitionTime":"2026-01-26T22:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.918266 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.918328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.918348 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.918372 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.918391 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:08Z","lastTransitionTime":"2026-01-26T22:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.021514 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.021582 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.021600 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.021625 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.021645 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:09Z","lastTransitionTime":"2026-01-26T22:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.124849 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.124909 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.124922 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.124944 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.124959 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:09Z","lastTransitionTime":"2026-01-26T22:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.227281 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.227328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.227368 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.227388 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.227399 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:09Z","lastTransitionTime":"2026-01-26T22:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.329706 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.329745 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.329756 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.329772 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.329784 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:09Z","lastTransitionTime":"2026-01-26T22:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.433399 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.433461 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.433478 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.433506 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.433525 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:09Z","lastTransitionTime":"2026-01-26T22:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.484499 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.484638 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.484683 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.484789 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:13.484717754 +0000 UTC m=+148.473489296 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.484799 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.484887 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:42:13.484864168 +0000 UTC m=+148.473635710 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.484932 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.484943 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.484977 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.484983 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.485050 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.485089 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.485122 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.485147 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.485097 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 22:42:13.485084064 +0000 UTC m=+148.473855616 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.485266 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.485288 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 22:42:13.485263829 +0000 UTC m=+148.474035381 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.485579 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:42:13.485389102 +0000 UTC m=+148.474160644 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.536258 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.536317 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.536340 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.536364 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.536382 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:09Z","lastTransitionTime":"2026-01-26T22:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.639300 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.639343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.639359 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.639380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.639392 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:09Z","lastTransitionTime":"2026-01-26T22:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.742133 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.742228 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.742247 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.742273 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.742293 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:09Z","lastTransitionTime":"2026-01-26T22:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.758579 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 14:24:24.655694967 +0000 UTC
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.759947 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.759971 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.760034 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.760077 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.760094 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.760300 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.760352 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.760477 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.844690 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.844947 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.845096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.845171 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.845255 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:09Z","lastTransitionTime":"2026-01-26T22:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.949871 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.949927 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.949946 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.949975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.949994 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:09Z","lastTransitionTime":"2026-01-26T22:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.054385 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.054429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.054439 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.054456 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.054467 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:10Z","lastTransitionTime":"2026-01-26T22:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.159255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.159347 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.159367 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.159425 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.159448 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:10Z","lastTransitionTime":"2026-01-26T22:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.263136 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.263242 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.263262 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.263290 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.263309 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:10Z","lastTransitionTime":"2026-01-26T22:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.373125 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.373218 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.373439 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.373477 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.373497 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:10Z","lastTransitionTime":"2026-01-26T22:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.477699 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.477777 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.477797 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.477825 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.477862 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:10Z","lastTransitionTime":"2026-01-26T22:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.581084 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.581155 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.581174 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.581230 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.581249 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:10Z","lastTransitionTime":"2026-01-26T22:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.685229 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.685293 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.685312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.685340 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.685359 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:10Z","lastTransitionTime":"2026-01-26T22:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.759280 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 03:05:00.908410418 +0000 UTC Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.785044 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.788425 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.788502 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.788529 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.788564 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.788595 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:10Z","lastTransitionTime":"2026-01-26T22:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.891758 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.892298 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.892465 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.892619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.892763 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:10Z","lastTransitionTime":"2026-01-26T22:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.996801 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.997720 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.997881 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.998035 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.998165 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:10Z","lastTransitionTime":"2026-01-26T22:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.101622 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.101677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.101695 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.101719 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.101739 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:11Z","lastTransitionTime":"2026-01-26T22:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.204632 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.205061 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.205286 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.205461 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.205610 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:11Z","lastTransitionTime":"2026-01-26T22:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.308269 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.308304 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.308313 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.308346 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.308357 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:11Z","lastTransitionTime":"2026-01-26T22:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.411179 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.411293 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.411314 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.411344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.411365 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:11Z","lastTransitionTime":"2026-01-26T22:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.514286 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.514358 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.514376 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.514406 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.514427 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:11Z","lastTransitionTime":"2026-01-26T22:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.618316 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.618383 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.618407 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.618442 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.618466 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:11Z","lastTransitionTime":"2026-01-26T22:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.723137 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.723266 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.723310 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.723339 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.723355 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:11Z","lastTransitionTime":"2026-01-26T22:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.760159 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 06:05:55.384045507 +0000 UTC Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.760432 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.760529 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:11 crc kubenswrapper[4793]: E0126 22:41:11.760599 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.760646 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:11 crc kubenswrapper[4793]: E0126 22:41:11.760876 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.760920 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:11 crc kubenswrapper[4793]: E0126 22:41:11.761012 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:11 crc kubenswrapper[4793]: E0126 22:41:11.761087 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.826709 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.826780 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.826802 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.826826 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.826843 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:11Z","lastTransitionTime":"2026-01-26T22:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.930283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.930375 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.930440 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.930481 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.930498 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:11Z","lastTransitionTime":"2026-01-26T22:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.034095 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.034159 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.034178 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.034237 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.034258 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:12Z","lastTransitionTime":"2026-01-26T22:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.136963 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.137044 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.137075 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.137109 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.137136 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:12Z","lastTransitionTime":"2026-01-26T22:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.241070 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.241139 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.241158 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.241183 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.241228 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:12Z","lastTransitionTime":"2026-01-26T22:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.344705 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.344779 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.344799 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.344830 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.344851 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:12Z","lastTransitionTime":"2026-01-26T22:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.448090 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.448158 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.448182 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.448250 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.448267 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:12Z","lastTransitionTime":"2026-01-26T22:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.551763 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.551849 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.551870 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.551901 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.551931 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:12Z","lastTransitionTime":"2026-01-26T22:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.655690 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.655761 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.655780 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.655808 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.655831 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:12Z","lastTransitionTime":"2026-01-26T22:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.759564 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.759638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.759657 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.759689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.759714 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:12Z","lastTransitionTime":"2026-01-26T22:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.760549 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 17:32:21.430864379 +0000 UTC Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.863533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.863607 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.863633 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.863668 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.863697 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:12Z","lastTransitionTime":"2026-01-26T22:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.967516 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.967576 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.967593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.967620 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.967640 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:12Z","lastTransitionTime":"2026-01-26T22:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.070355 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.070431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.070454 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.070482 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.070502 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:13Z","lastTransitionTime":"2026-01-26T22:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.174463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.174516 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.174532 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.174556 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.174571 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:13Z","lastTransitionTime":"2026-01-26T22:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.278310 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.278395 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.278413 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.278446 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.278473 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:13Z","lastTransitionTime":"2026-01-26T22:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.381463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.381534 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.381555 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.381582 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.381632 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:13Z","lastTransitionTime":"2026-01-26T22:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.484689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.484770 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.484798 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.484834 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.484859 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:13Z","lastTransitionTime":"2026-01-26T22:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.587888 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.587950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.587971 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.587998 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.588019 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:13Z","lastTransitionTime":"2026-01-26T22:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.691438 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.691512 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.691532 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.691561 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.691581 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:13Z","lastTransitionTime":"2026-01-26T22:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.759945 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.759992 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:13 crc kubenswrapper[4793]: E0126 22:41:13.760129 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.760218 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.760277 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:13 crc kubenswrapper[4793]: E0126 22:41:13.760441 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:13 crc kubenswrapper[4793]: E0126 22:41:13.760564 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:13 crc kubenswrapper[4793]: E0126 22:41:13.760676 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.760881 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 16:24:51.983854333 +0000 UTC Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.794300 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.794356 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.794374 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.794400 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.794419 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:13Z","lastTransitionTime":"2026-01-26T22:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.897738 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.897807 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.897832 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.897901 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.897931 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:13Z","lastTransitionTime":"2026-01-26T22:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.001559 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.002276 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.002665 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.002981 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.003318 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:14Z","lastTransitionTime":"2026-01-26T22:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.106583 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.106638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.106656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.106681 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.106699 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:14Z","lastTransitionTime":"2026-01-26T22:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.209741 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.209820 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.209837 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.209864 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.209885 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:14Z","lastTransitionTime":"2026-01-26T22:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.246629 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.246672 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.246689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.246714 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.246733 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:14Z","lastTransitionTime":"2026-01-26T22:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.283092 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk"] Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.283777 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.285936 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.287354 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.289694 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.289850 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.327562 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=4.327534678 podStartE2EDuration="4.327534678s" podCreationTimestamp="2026-01-26 22:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:14.327342773 +0000 UTC m=+89.316114285" watchObservedRunningTime="2026-01-26 22:41:14.327534678 +0000 UTC m=+89.316306200" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.365638 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-cjhd7" podStartSLOduration=62.36560229 podStartE2EDuration="1m2.36560229s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:14.363600546 +0000 UTC m=+89.352372098" watchObservedRunningTime="2026-01-26 22:41:14.36560229 +0000 UTC m=+89.354373812" Jan 26 22:41:14 crc 
kubenswrapper[4793]: I0126 22:41:14.397816 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=69.397795542 podStartE2EDuration="1m9.397795542s" podCreationTimestamp="2026-01-26 22:40:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:14.396746144 +0000 UTC m=+89.385517656" watchObservedRunningTime="2026-01-26 22:41:14.397795542 +0000 UTC m=+89.386567054" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.411162 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=38.411123434 podStartE2EDuration="38.411123434s" podCreationTimestamp="2026-01-26 22:40:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:14.410050314 +0000 UTC m=+89.398821836" watchObservedRunningTime="2026-01-26 22:41:14.411123434 +0000 UTC m=+89.399894996" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.424458 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-jwwcw" podStartSLOduration=62.424427744 podStartE2EDuration="1m2.424427744s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:14.424345882 +0000 UTC m=+89.413117394" watchObservedRunningTime="2026-01-26 22:41:14.424427744 +0000 UTC m=+89.413199276" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.443399 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-l5qgq" podStartSLOduration=62.443362427 podStartE2EDuration="1m2.443362427s" podCreationTimestamp="2026-01-26 
22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:14.442657708 +0000 UTC m=+89.431429240" watchObservedRunningTime="2026-01-26 22:41:14.443362427 +0000 UTC m=+89.432133979" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.444996 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.445052 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.445073 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.445113 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: 
\"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.445256 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.527264 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podStartSLOduration=62.52723858 podStartE2EDuration="1m2.52723858s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:14.502596523 +0000 UTC m=+89.491368035" watchObservedRunningTime="2026-01-26 22:41:14.52723858 +0000 UTC m=+89.516010102" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.545921 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.545976 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.546031 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.546079 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.546104 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.546276 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.546445 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.546892 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.555414 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" podStartSLOduration=62.555386063 podStartE2EDuration="1m2.555386063s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:14.554604962 +0000 UTC m=+89.543376524" watchObservedRunningTime="2026-01-26 22:41:14.555386063 +0000 UTC m=+89.544157595" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.561753 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.573442 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=69.573419732 podStartE2EDuration="1m9.573419732s" podCreationTimestamp="2026-01-26 22:40:05 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:14.573077363 +0000 UTC m=+89.561848885" watchObservedRunningTime="2026-01-26 22:41:14.573419732 +0000 UTC m=+89.562191254" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.578117 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.590128 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=13.590099984 podStartE2EDuration="13.590099984s" podCreationTimestamp="2026-01-26 22:41:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:14.589274612 +0000 UTC m=+89.578046134" watchObservedRunningTime="2026-01-26 22:41:14.590099984 +0000 UTC m=+89.578871506" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.604870 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.657976 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" podStartSLOduration=62.657945833 podStartE2EDuration="1m2.657945833s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:14.657624914 +0000 UTC m=+89.646396436" watchObservedRunningTime="2026-01-26 22:41:14.657945833 +0000 UTC m=+89.646717385" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.761354 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 12:12:01.175684834 +0000 UTC Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.761794 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.771438 4793 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 22:41:15 crc kubenswrapper[4793]: I0126 22:41:15.326278 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" event={"ID":"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2","Type":"ContainerStarted","Data":"d65fc65845d91cee768e8e6450b6561d5d0b511d16ad27a6a9f71f69756281d5"} Jan 26 22:41:15 crc kubenswrapper[4793]: I0126 22:41:15.326374 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" event={"ID":"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2","Type":"ContainerStarted","Data":"8eabc53d55bdd22b3981705629ad2b1a8d86f0ef8864e932f2dddc83049ecb2f"} Jan 26 22:41:15 crc 
kubenswrapper[4793]: I0126 22:41:15.349817 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" podStartSLOduration=63.349792452 podStartE2EDuration="1m3.349792452s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:15.347992904 +0000 UTC m=+90.336764466" watchObservedRunningTime="2026-01-26 22:41:15.349792452 +0000 UTC m=+90.338563994" Jan 26 22:41:15 crc kubenswrapper[4793]: I0126 22:41:15.760468 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:15 crc kubenswrapper[4793]: I0126 22:41:15.760525 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:15 crc kubenswrapper[4793]: I0126 22:41:15.760573 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:15 crc kubenswrapper[4793]: E0126 22:41:15.763226 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:15 crc kubenswrapper[4793]: I0126 22:41:15.763261 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:15 crc kubenswrapper[4793]: E0126 22:41:15.763395 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:15 crc kubenswrapper[4793]: E0126 22:41:15.763477 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:15 crc kubenswrapper[4793]: E0126 22:41:15.763622 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:17 crc kubenswrapper[4793]: I0126 22:41:17.760592 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:17 crc kubenswrapper[4793]: I0126 22:41:17.760729 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:17 crc kubenswrapper[4793]: I0126 22:41:17.760975 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:17 crc kubenswrapper[4793]: E0126 22:41:17.761269 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:17 crc kubenswrapper[4793]: I0126 22:41:17.761323 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:17 crc kubenswrapper[4793]: E0126 22:41:17.761586 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:17 crc kubenswrapper[4793]: E0126 22:41:17.761679 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:17 crc kubenswrapper[4793]: E0126 22:41:17.761809 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:18 crc kubenswrapper[4793]: I0126 22:41:18.761851 4793 scope.go:117] "RemoveContainer" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23" Jan 26 22:41:18 crc kubenswrapper[4793]: E0126 22:41:18.762165 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" Jan 26 22:41:19 crc kubenswrapper[4793]: I0126 22:41:19.760248 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:19 crc kubenswrapper[4793]: I0126 22:41:19.760406 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:19 crc kubenswrapper[4793]: I0126 22:41:19.760409 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:19 crc kubenswrapper[4793]: I0126 22:41:19.760590 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:19 crc kubenswrapper[4793]: E0126 22:41:19.760571 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:19 crc kubenswrapper[4793]: E0126 22:41:19.760872 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:19 crc kubenswrapper[4793]: E0126 22:41:19.761324 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:19 crc kubenswrapper[4793]: E0126 22:41:19.761472 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:21 crc kubenswrapper[4793]: I0126 22:41:21.760500 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:21 crc kubenswrapper[4793]: I0126 22:41:21.760549 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:21 crc kubenswrapper[4793]: I0126 22:41:21.760560 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:21 crc kubenswrapper[4793]: I0126 22:41:21.760518 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:21 crc kubenswrapper[4793]: E0126 22:41:21.760662 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:21 crc kubenswrapper[4793]: E0126 22:41:21.760831 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:21 crc kubenswrapper[4793]: E0126 22:41:21.760949 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:21 crc kubenswrapper[4793]: E0126 22:41:21.761326 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:23 crc kubenswrapper[4793]: I0126 22:41:23.760540 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:23 crc kubenswrapper[4793]: I0126 22:41:23.760645 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:23 crc kubenswrapper[4793]: E0126 22:41:23.761093 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:23 crc kubenswrapper[4793]: I0126 22:41:23.760666 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:23 crc kubenswrapper[4793]: E0126 22:41:23.761334 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:23 crc kubenswrapper[4793]: I0126 22:41:23.760645 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:23 crc kubenswrapper[4793]: E0126 22:41:23.761506 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:23 crc kubenswrapper[4793]: E0126 22:41:23.761604 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:25 crc kubenswrapper[4793]: I0126 22:41:25.760317 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:25 crc kubenswrapper[4793]: I0126 22:41:25.760362 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:25 crc kubenswrapper[4793]: I0126 22:41:25.760397 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:25 crc kubenswrapper[4793]: I0126 22:41:25.760338 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:25 crc kubenswrapper[4793]: E0126 22:41:25.760533 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:25 crc kubenswrapper[4793]: E0126 22:41:25.762624 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:25 crc kubenswrapper[4793]: E0126 22:41:25.762706 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:25 crc kubenswrapper[4793]: E0126 22:41:25.763168 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:27 crc kubenswrapper[4793]: I0126 22:41:27.760280 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:27 crc kubenswrapper[4793]: I0126 22:41:27.761503 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:27 crc kubenswrapper[4793]: I0126 22:41:27.761745 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:41:27 crc kubenswrapper[4793]: E0126 22:41:27.761774 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:41:27 crc kubenswrapper[4793]: I0126 22:41:27.761918 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:41:27 crc kubenswrapper[4793]: E0126 22:41:27.762179 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:41:27 crc kubenswrapper[4793]: E0126 22:41:27.762081 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:41:27 crc kubenswrapper[4793]: E0126 22:41:27.762632 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:41:29 crc kubenswrapper[4793]: I0126 22:41:29.760513 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:41:29 crc kubenswrapper[4793]: I0126 22:41:29.760561 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:41:29 crc kubenswrapper[4793]: E0126 22:41:29.760720 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:41:29 crc kubenswrapper[4793]: I0126 22:41:29.761002 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:41:29 crc kubenswrapper[4793]: I0126 22:41:29.761052 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:41:29 crc kubenswrapper[4793]: E0126 22:41:29.761138 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:41:29 crc kubenswrapper[4793]: E0126 22:41:29.761419 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:41:29 crc kubenswrapper[4793]: E0126 22:41:29.761654 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:41:30 crc kubenswrapper[4793]: I0126 22:41:30.746082 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:41:30 crc kubenswrapper[4793]: E0126 22:41:30.746424 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 22:41:30 crc kubenswrapper[4793]: E0126 22:41:30.746550 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs podName:2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc nodeName:}" failed. No retries permitted until 2026-01-26 22:42:34.746516665 +0000 UTC m=+169.735288207 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs") pod "network-metrics-daemon-7rl9w" (UID: "2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 22:41:31 crc kubenswrapper[4793]: I0126 22:41:31.760750 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:41:31 crc kubenswrapper[4793]: I0126 22:41:31.760833 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:41:31 crc kubenswrapper[4793]: I0126 22:41:31.760750 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:41:31 crc kubenswrapper[4793]: E0126 22:41:31.760960 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:41:31 crc kubenswrapper[4793]: I0126 22:41:31.761047 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:41:31 crc kubenswrapper[4793]: E0126 22:41:31.761240 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:41:31 crc kubenswrapper[4793]: E0126 22:41:31.761410 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:41:31 crc kubenswrapper[4793]: E0126 22:41:31.761475 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:41:32 crc kubenswrapper[4793]: I0126 22:41:32.761653 4793 scope.go:117] "RemoveContainer" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23"
Jan 26 22:41:32 crc kubenswrapper[4793]: E0126 22:41:32.761919 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"
Jan 26 22:41:33 crc kubenswrapper[4793]: I0126 22:41:33.760311 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:41:33 crc kubenswrapper[4793]: I0126 22:41:33.760442 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:41:33 crc kubenswrapper[4793]: I0126 22:41:33.760328 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:41:33 crc kubenswrapper[4793]: I0126 22:41:33.760446 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:41:33 crc kubenswrapper[4793]: E0126 22:41:33.760623 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:41:33 crc kubenswrapper[4793]: E0126 22:41:33.760497 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:41:33 crc kubenswrapper[4793]: E0126 22:41:33.760803 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:41:33 crc kubenswrapper[4793]: E0126 22:41:33.760866 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:41:35 crc kubenswrapper[4793]: I0126 22:41:35.760695 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:41:35 crc kubenswrapper[4793]: I0126 22:41:35.760754 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:41:35 crc kubenswrapper[4793]: E0126 22:41:35.761982 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:41:35 crc kubenswrapper[4793]: I0126 22:41:35.762005 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:41:35 crc kubenswrapper[4793]: I0126 22:41:35.762061 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:41:35 crc kubenswrapper[4793]: E0126 22:41:35.762269 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:41:35 crc kubenswrapper[4793]: E0126 22:41:35.762325 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:41:35 crc kubenswrapper[4793]: E0126 22:41:35.762482 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:41:37 crc kubenswrapper[4793]: I0126 22:41:37.760873 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:41:37 crc kubenswrapper[4793]: E0126 22:41:37.761135 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:41:37 crc kubenswrapper[4793]: I0126 22:41:37.761510 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:41:37 crc kubenswrapper[4793]: I0126 22:41:37.761568 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:41:37 crc kubenswrapper[4793]: E0126 22:41:37.761619 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:41:37 crc kubenswrapper[4793]: I0126 22:41:37.761510 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:41:37 crc kubenswrapper[4793]: E0126 22:41:37.761889 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:41:37 crc kubenswrapper[4793]: E0126 22:41:37.761769 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:41:39 crc kubenswrapper[4793]: I0126 22:41:39.760264 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:41:39 crc kubenswrapper[4793]: I0126 22:41:39.760338 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:41:39 crc kubenswrapper[4793]: I0126 22:41:39.760441 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:41:39 crc kubenswrapper[4793]: E0126 22:41:39.760654 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:41:39 crc kubenswrapper[4793]: I0126 22:41:39.760773 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:41:39 crc kubenswrapper[4793]: E0126 22:41:39.760960 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:41:39 crc kubenswrapper[4793]: E0126 22:41:39.761032 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:41:39 crc kubenswrapper[4793]: E0126 22:41:39.761131 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:41:41 crc kubenswrapper[4793]: I0126 22:41:41.760792 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:41:41 crc kubenswrapper[4793]: I0126 22:41:41.760902 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:41:41 crc kubenswrapper[4793]: I0126 22:41:41.760948 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:41:41 crc kubenswrapper[4793]: E0126 22:41:41.761233 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:41:41 crc kubenswrapper[4793]: I0126 22:41:41.761271 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:41:41 crc kubenswrapper[4793]: E0126 22:41:41.761480 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:41:41 crc kubenswrapper[4793]: E0126 22:41:41.761585 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:41:41 crc kubenswrapper[4793]: E0126 22:41:41.761669 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:41:43 crc kubenswrapper[4793]: I0126 22:41:43.760470 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:41:43 crc kubenswrapper[4793]: I0126 22:41:43.760578 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:41:43 crc kubenswrapper[4793]: I0126 22:41:43.760636 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:41:43 crc kubenswrapper[4793]: E0126 22:41:43.760803 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:41:43 crc kubenswrapper[4793]: E0126 22:41:43.760996 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:41:43 crc kubenswrapper[4793]: I0126 22:41:43.761032 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:41:43 crc kubenswrapper[4793]: E0126 22:41:43.761337 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:41:43 crc kubenswrapper[4793]: E0126 22:41:43.761695 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:41:43 crc kubenswrapper[4793]: I0126 22:41:43.762442 4793 scope.go:117] "RemoveContainer" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23"
Jan 26 22:41:43 crc kubenswrapper[4793]: E0126 22:41:43.762644 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"
Jan 26 22:41:45 crc kubenswrapper[4793]: E0126 22:41:45.715254 4793 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Jan 26 22:41:45 crc kubenswrapper[4793]: I0126 22:41:45.760445 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:41:45 crc kubenswrapper[4793]: I0126 22:41:45.760445 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:41:45 crc kubenswrapper[4793]: I0126 22:41:45.760523 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:41:45 crc kubenswrapper[4793]: I0126 22:41:45.760629 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:41:45 crc kubenswrapper[4793]: E0126 22:41:45.762428 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:41:45 crc kubenswrapper[4793]: E0126 22:41:45.762549 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:41:45 crc kubenswrapper[4793]: E0126 22:41:45.762657 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:41:45 crc kubenswrapper[4793]: E0126 22:41:45.762782 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:41:45 crc kubenswrapper[4793]: E0126 22:41:45.883490 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 26 22:41:47 crc kubenswrapper[4793]: I0126 22:41:47.760119 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:41:47 crc kubenswrapper[4793]: I0126 22:41:47.760232 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:41:47 crc kubenswrapper[4793]: I0126 22:41:47.760305 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:41:47 crc kubenswrapper[4793]: I0126 22:41:47.760133 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:41:47 crc kubenswrapper[4793]: E0126 22:41:47.760388 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:41:47 crc kubenswrapper[4793]: E0126 22:41:47.760534 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:41:47 crc kubenswrapper[4793]: E0126 22:41:47.760643 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:41:47 crc kubenswrapper[4793]: E0126 22:41:47.760797 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:41:49 crc kubenswrapper[4793]: I0126 22:41:49.460669 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l5qgq_2e6daa0d-7641-46e1-b9ab-8479c1cd00d6/kube-multus/1.log"
Jan 26 22:41:49 crc kubenswrapper[4793]: I0126 22:41:49.461453 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l5qgq_2e6daa0d-7641-46e1-b9ab-8479c1cd00d6/kube-multus/0.log"
Jan 26 22:41:49 crc kubenswrapper[4793]: I0126 22:41:49.461519 4793 generic.go:334] "Generic (PLEG): container finished" podID="2e6daa0d-7641-46e1-b9ab-8479c1cd00d6" containerID="2cb61fdad3703c9db3f70a80af86571cbed8b1dc20e073f4ee149431f71f0298" exitCode=1
Jan 26 22:41:49 crc kubenswrapper[4793]: I0126 22:41:49.461564 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l5qgq" event={"ID":"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6","Type":"ContainerDied","Data":"2cb61fdad3703c9db3f70a80af86571cbed8b1dc20e073f4ee149431f71f0298"}
Jan 26 22:41:49 crc kubenswrapper[4793]: I0126 22:41:49.461614 4793 scope.go:117] "RemoveContainer" containerID="a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08"
Jan 26 22:41:49 crc kubenswrapper[4793]: I0126 22:41:49.464003 4793 scope.go:117] "RemoveContainer" containerID="2cb61fdad3703c9db3f70a80af86571cbed8b1dc20e073f4ee149431f71f0298"
Jan 26 22:41:49 crc kubenswrapper[4793]: E0126 22:41:49.464821 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-l5qgq_openshift-multus(2e6daa0d-7641-46e1-b9ab-8479c1cd00d6)\"" pod="openshift-multus/multus-l5qgq" podUID="2e6daa0d-7641-46e1-b9ab-8479c1cd00d6"
Jan 26 22:41:49 crc kubenswrapper[4793]: I0126 22:41:49.760522 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:41:49 crc kubenswrapper[4793]: I0126 22:41:49.760538 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:41:49 crc kubenswrapper[4793]: E0126 22:41:49.761136 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:41:49 crc kubenswrapper[4793]: I0126 22:41:49.760627 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:41:49 crc kubenswrapper[4793]: I0126 22:41:49.760575 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:41:49 crc kubenswrapper[4793]: E0126 22:41:49.761317 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:41:49 crc kubenswrapper[4793]: E0126 22:41:49.761443 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:41:49 crc kubenswrapper[4793]: E0126 22:41:49.761565 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:41:50 crc kubenswrapper[4793]: I0126 22:41:50.468298 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l5qgq_2e6daa0d-7641-46e1-b9ab-8479c1cd00d6/kube-multus/1.log"
Jan 26 22:41:50 crc kubenswrapper[4793]: E0126 22:41:50.884775 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 26 22:41:51 crc kubenswrapper[4793]: I0126 22:41:51.760968 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:41:51 crc kubenswrapper[4793]: I0126 22:41:51.761043 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:41:51 crc kubenswrapper[4793]: E0126 22:41:51.761293 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:41:51 crc kubenswrapper[4793]: I0126 22:41:51.761326 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:41:51 crc kubenswrapper[4793]: I0126 22:41:51.761458 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:41:51 crc kubenswrapper[4793]: E0126 22:41:51.761645 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:41:51 crc kubenswrapper[4793]: E0126 22:41:51.761803 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:41:51 crc kubenswrapper[4793]: E0126 22:41:51.762031 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:53 crc kubenswrapper[4793]: I0126 22:41:53.760445 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:53 crc kubenswrapper[4793]: I0126 22:41:53.760603 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:53 crc kubenswrapper[4793]: I0126 22:41:53.760633 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:53 crc kubenswrapper[4793]: I0126 22:41:53.760596 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:53 crc kubenswrapper[4793]: E0126 22:41:53.760762 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:53 crc kubenswrapper[4793]: E0126 22:41:53.761003 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:53 crc kubenswrapper[4793]: E0126 22:41:53.761164 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:53 crc kubenswrapper[4793]: E0126 22:41:53.761365 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:55 crc kubenswrapper[4793]: I0126 22:41:55.760385 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:55 crc kubenswrapper[4793]: I0126 22:41:55.760489 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:55 crc kubenswrapper[4793]: E0126 22:41:55.762263 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:55 crc kubenswrapper[4793]: I0126 22:41:55.762339 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:55 crc kubenswrapper[4793]: I0126 22:41:55.762332 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:55 crc kubenswrapper[4793]: E0126 22:41:55.762519 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:55 crc kubenswrapper[4793]: E0126 22:41:55.762635 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:55 crc kubenswrapper[4793]: E0126 22:41:55.762814 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:55 crc kubenswrapper[4793]: E0126 22:41:55.886225 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 22:41:57 crc kubenswrapper[4793]: I0126 22:41:57.760428 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:57 crc kubenswrapper[4793]: E0126 22:41:57.760618 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:57 crc kubenswrapper[4793]: I0126 22:41:57.760695 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:57 crc kubenswrapper[4793]: I0126 22:41:57.760729 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:57 crc kubenswrapper[4793]: I0126 22:41:57.760785 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:57 crc kubenswrapper[4793]: E0126 22:41:57.761144 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:57 crc kubenswrapper[4793]: E0126 22:41:57.761347 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:57 crc kubenswrapper[4793]: E0126 22:41:57.761487 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:58 crc kubenswrapper[4793]: I0126 22:41:58.761777 4793 scope.go:117] "RemoveContainer" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23" Jan 26 22:41:59 crc kubenswrapper[4793]: I0126 22:41:59.503617 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/3.log" Jan 26 22:41:59 crc kubenswrapper[4793]: I0126 22:41:59.508047 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerStarted","Data":"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429"} Jan 26 22:41:59 crc kubenswrapper[4793]: I0126 22:41:59.508525 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:41:59 crc kubenswrapper[4793]: I0126 22:41:59.538896 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podStartSLOduration=107.538877148 podStartE2EDuration="1m47.538877148s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:59.538705292 +0000 UTC m=+134.527476824" watchObservedRunningTime="2026-01-26 22:41:59.538877148 +0000 UTC m=+134.527648660" Jan 26 22:41:59 crc kubenswrapper[4793]: I0126 22:41:59.697319 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7rl9w"] Jan 26 22:41:59 crc kubenswrapper[4793]: I0126 22:41:59.697452 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:59 crc kubenswrapper[4793]: E0126 22:41:59.697560 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:59 crc kubenswrapper[4793]: I0126 22:41:59.760002 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:59 crc kubenswrapper[4793]: I0126 22:41:59.760002 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:59 crc kubenswrapper[4793]: I0126 22:41:59.760030 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:59 crc kubenswrapper[4793]: E0126 22:41:59.760166 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:59 crc kubenswrapper[4793]: E0126 22:41:59.760467 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:59 crc kubenswrapper[4793]: E0126 22:41:59.760518 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:42:00 crc kubenswrapper[4793]: E0126 22:42:00.887725 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 22:42:01 crc kubenswrapper[4793]: I0126 22:42:01.760971 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:42:01 crc kubenswrapper[4793]: I0126 22:42:01.761044 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:42:01 crc kubenswrapper[4793]: I0126 22:42:01.761074 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:42:01 crc kubenswrapper[4793]: E0126 22:42:01.761234 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:42:01 crc kubenswrapper[4793]: I0126 22:42:01.761299 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:42:01 crc kubenswrapper[4793]: E0126 22:42:01.761455 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:42:01 crc kubenswrapper[4793]: E0126 22:42:01.761557 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:42:01 crc kubenswrapper[4793]: E0126 22:42:01.761666 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:42:03 crc kubenswrapper[4793]: I0126 22:42:03.760098 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:42:03 crc kubenswrapper[4793]: I0126 22:42:03.760111 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:42:03 crc kubenswrapper[4793]: I0126 22:42:03.760334 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:42:03 crc kubenswrapper[4793]: E0126 22:42:03.760526 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:42:03 crc kubenswrapper[4793]: I0126 22:42:03.760566 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:42:03 crc kubenswrapper[4793]: E0126 22:42:03.760754 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:42:03 crc kubenswrapper[4793]: E0126 22:42:03.760920 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:42:03 crc kubenswrapper[4793]: E0126 22:42:03.761178 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:42:04 crc kubenswrapper[4793]: I0126 22:42:04.760863 4793 scope.go:117] "RemoveContainer" containerID="2cb61fdad3703c9db3f70a80af86571cbed8b1dc20e073f4ee149431f71f0298" Jan 26 22:42:05 crc kubenswrapper[4793]: I0126 22:42:05.535066 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l5qgq_2e6daa0d-7641-46e1-b9ab-8479c1cd00d6/kube-multus/1.log" Jan 26 22:42:05 crc kubenswrapper[4793]: I0126 22:42:05.535608 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l5qgq" event={"ID":"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6","Type":"ContainerStarted","Data":"99049525e6cacaf6c4ab17030e1fd8c38dba224b6ec01ee662a23e82658ca382"} Jan 26 22:42:05 crc kubenswrapper[4793]: I0126 22:42:05.761433 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:42:05 crc kubenswrapper[4793]: I0126 22:42:05.761474 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:42:05 crc kubenswrapper[4793]: I0126 22:42:05.761556 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:42:05 crc kubenswrapper[4793]: E0126 22:42:05.762771 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:42:05 crc kubenswrapper[4793]: I0126 22:42:05.762793 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:42:05 crc kubenswrapper[4793]: E0126 22:42:05.762948 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:42:05 crc kubenswrapper[4793]: E0126 22:42:05.763059 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:42:05 crc kubenswrapper[4793]: E0126 22:42:05.763127 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:42:05 crc kubenswrapper[4793]: E0126 22:42:05.889237 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 22:42:07 crc kubenswrapper[4793]: I0126 22:42:07.760763 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:42:07 crc kubenswrapper[4793]: I0126 22:42:07.760855 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:42:07 crc kubenswrapper[4793]: I0126 22:42:07.760916 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:42:07 crc kubenswrapper[4793]: I0126 22:42:07.760997 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:42:07 crc kubenswrapper[4793]: E0126 22:42:07.761176 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:42:07 crc kubenswrapper[4793]: E0126 22:42:07.761367 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:42:07 crc kubenswrapper[4793]: E0126 22:42:07.761576 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:42:07 crc kubenswrapper[4793]: E0126 22:42:07.761699 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:42:09 crc kubenswrapper[4793]: I0126 22:42:09.759991 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:42:09 crc kubenswrapper[4793]: I0126 22:42:09.760093 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:42:09 crc kubenswrapper[4793]: I0126 22:42:09.760139 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:42:09 crc kubenswrapper[4793]: I0126 22:42:09.760161 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:42:09 crc kubenswrapper[4793]: E0126 22:42:09.760318 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:42:09 crc kubenswrapper[4793]: E0126 22:42:09.760544 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:42:09 crc kubenswrapper[4793]: E0126 22:42:09.760669 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:42:09 crc kubenswrapper[4793]: E0126 22:42:09.760848 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:42:11 crc kubenswrapper[4793]: I0126 22:42:11.760237 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:42:11 crc kubenswrapper[4793]: I0126 22:42:11.760259 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:42:11 crc kubenswrapper[4793]: I0126 22:42:11.760947 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:42:11 crc kubenswrapper[4793]: I0126 22:42:11.761231 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:42:11 crc kubenswrapper[4793]: I0126 22:42:11.763174 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 22:42:11 crc kubenswrapper[4793]: I0126 22:42:11.764114 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 22:42:11 crc kubenswrapper[4793]: I0126 22:42:11.765009 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 22:42:11 crc kubenswrapper[4793]: I0126 22:42:11.765067 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 22:42:11 crc kubenswrapper[4793]: I0126 22:42:11.765221 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 22:42:11 crc kubenswrapper[4793]: I0126 22:42:11.765387 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 22:42:13 crc kubenswrapper[4793]: I0126 22:42:13.542142 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:13 crc kubenswrapper[4793]: E0126 22:42:13.542397 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 22:44:15.542351002 +0000 UTC m=+270.531122554 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:13 crc kubenswrapper[4793]: I0126 22:42:13.542973 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:42:13 crc kubenswrapper[4793]: I0126 22:42:13.543031 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:42:13 crc kubenswrapper[4793]: I0126 22:42:13.543085 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:42:13 crc kubenswrapper[4793]: I0126 22:42:13.543125 4793 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:42:13 crc kubenswrapper[4793]: I0126 22:42:13.544820 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:42:13 crc kubenswrapper[4793]: I0126 22:42:13.551375 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:42:13 crc kubenswrapper[4793]: I0126 22:42:13.552234 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:42:13 crc kubenswrapper[4793]: I0126 22:42:13.553336 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:42:13 crc kubenswrapper[4793]: I0126 22:42:13.586817 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:42:13 crc kubenswrapper[4793]: I0126 22:42:13.600712 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:42:13 crc kubenswrapper[4793]: I0126 22:42:13.624930 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:42:13 crc kubenswrapper[4793]: W0126 22:42:13.923783 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-6e603ffb10c2b9a8a36ebcda7fcbb479159dd06d47b17da8ad2165ac326bcd98 WatchSource:0}: Error finding container 6e603ffb10c2b9a8a36ebcda7fcbb479159dd06d47b17da8ad2165ac326bcd98: Status 404 returned error can't find the container with id 6e603ffb10c2b9a8a36ebcda7fcbb479159dd06d47b17da8ad2165ac326bcd98 Jan 26 22:42:14 crc kubenswrapper[4793]: W0126 22:42:14.191136 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-9cca13378251c7756913aa36f814f2ab9f345b1b1dc4324691b51048b62d12ef WatchSource:0}: Error finding container 9cca13378251c7756913aa36f814f2ab9f345b1b1dc4324691b51048b62d12ef: Status 404 returned error can't find the container with id 9cca13378251c7756913aa36f814f2ab9f345b1b1dc4324691b51048b62d12ef Jan 26 22:42:14 crc kubenswrapper[4793]: W0126 22:42:14.211087 4793 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-43cb98d1c3b07bf4e268aeb02ba287e41127cf29f89b3cf70e0a7a4bbf06962b WatchSource:0}: Error finding container 43cb98d1c3b07bf4e268aeb02ba287e41127cf29f89b3cf70e0a7a4bbf06962b: Status 404 returned error can't find the container with id 43cb98d1c3b07bf4e268aeb02ba287e41127cf29f89b3cf70e0a7a4bbf06962b Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.575171 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"2f088f845299118c92b9bc33deda902cd2ec4f845f6b4347173c9f161e6966dc"} Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.576026 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9cca13378251c7756913aa36f814f2ab9f345b1b1dc4324691b51048b62d12ef"} Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.579169 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"aaf52168a53ffd309e321091c60bc0d60a2322b0deadea08631dbd221d5731fc"} Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.580553 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"43cb98d1c3b07bf4e268aeb02ba287e41127cf29f89b3cf70e0a7a4bbf06962b"} Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.582649 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"3df8a6d34f47f8f5fa901d76254e2e15932e69736a3fcdbbc01d5dc978a0f917"} Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.582692 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6e603ffb10c2b9a8a36ebcda7fcbb479159dd06d47b17da8ad2165ac326bcd98"} Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.582955 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.797857 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.856075 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nd4pl"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.857224 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.864268 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.864274 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.873164 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.873699 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.878382 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.885029 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.886438 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.886707 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.896329 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.896378 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.896720 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.896849 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.897818 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.899894 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.900789 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.901720 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xhc46"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.904785 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.905059 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.905399 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.907201 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.907534 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.908527 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.908977 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.915594 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.918019 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2jn5q"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.918858 4793 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.919585 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lvnpc"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.920037 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.920730 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.921587 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.923010 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.923637 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.923891 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.924018 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.924371 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.924750 4793 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.925018 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.925284 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.925598 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.925859 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.926097 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.926465 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.927343 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.927469 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.927816 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.928343 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-g9vhm"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.928731 4793 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.929347 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.931173 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.931523 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.931735 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.931941 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.932130 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.932354 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.932551 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.932742 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.932587 4793 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.933136 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.933366 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.933439 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.933371 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.933867 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.933910 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.934054 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.934130 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.934242 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.934302 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 
22:42:14.934840 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.935066 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.935168 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.935097 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.935386 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.935434 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.935635 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.935715 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.935645 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.936335 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.936507 4793 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.938963 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-57ccj"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.939225 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.939566 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.939696 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.939847 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.939926 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.940039 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.940091 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.939572 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.940264 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 22:42:14 crc 
kubenswrapper[4793]: I0126 22:42:14.940559 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gxxfj"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.940807 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.941011 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.941428 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.941834 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.941896 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.953535 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.954771 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.957685 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2fq8c"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.972182 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.975516 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.975782 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.975872 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8"]
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.976036 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.977389 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.977551 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.977555 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.977807 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.977970 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.978014 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.978059 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.977912 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.978270 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.978636 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.978857 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.979705 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.979931 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.979980 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.980148 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.980515 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.996144 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.996406 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.998404 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999083 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/108460a3-822d-405c-a0fa-cdd12ea4123f-node-pullsecrets\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999136 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-audit\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999185 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b75b5dd-f846-44d1-b751-5d8241200a89-serving-cert\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999217 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dmmw\" (UniqueName: \"kubernetes.io/projected/1b75b5dd-f846-44d1-b751-5d8241200a89-kube-api-access-8dmmw\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999244 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwn7v\" (UniqueName: \"kubernetes.io/projected/8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a-kube-api-access-hwn7v\") pod \"machine-api-operator-5694c8668f-xhc46\" (UID: \"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999271 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/49db6ef3-ecbb-44ba-90bd-6b4fad356374-auth-proxy-config\") pod \"machine-approver-56656f9798-h84t5\" (UID: \"49db6ef3-ecbb-44ba-90bd-6b4fad356374\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999301 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-config\") pod \"route-controller-manager-6576b87f9c-r5m9x\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999318 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49db6ef3-ecbb-44ba-90bd-6b4fad356374-config\") pod \"machine-approver-56656f9798-h84t5\" (UID: \"49db6ef3-ecbb-44ba-90bd-6b4fad356374\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999362 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-etcd-serving-ca\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999383 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1b75b5dd-f846-44d1-b751-5d8241200a89-audit-policies\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999399 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1b75b5dd-f846-44d1-b751-5d8241200a89-etcd-client\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999423 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wtkq\" (UniqueName: \"kubernetes.io/projected/49db6ef3-ecbb-44ba-90bd-6b4fad356374-kube-api-access-9wtkq\") pod \"machine-approver-56656f9798-h84t5\" (UID: \"49db6ef3-ecbb-44ba-90bd-6b4fad356374\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999445 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1b75b5dd-f846-44d1-b751-5d8241200a89-encryption-config\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999462 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a-images\") pod \"machine-api-operator-5694c8668f-xhc46\" (UID: \"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999478 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1b75b5dd-f846-44d1-b751-5d8241200a89-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999498 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/49db6ef3-ecbb-44ba-90bd-6b4fad356374-machine-approver-tls\") pod \"machine-approver-56656f9798-h84t5\" (UID: \"49db6ef3-ecbb-44ba-90bd-6b4fad356374\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999513 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/108460a3-822d-405c-a0fa-cdd12ea4123f-etcd-client\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999528 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-client-ca\") pod \"route-controller-manager-6576b87f9c-r5m9x\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999549 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl"
Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999565 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b75b5dd-f846-44d1-b751-5d8241200a89-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:14.999589 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a-config\") pod \"machine-api-operator-5694c8668f-xhc46\" (UID: \"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:14.999609 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1b75b5dd-f846-44d1-b751-5d8241200a89-audit-dir\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:14.999633 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/108460a3-822d-405c-a0fa-cdd12ea4123f-serving-cert\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:14.999650 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-config\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:14.999667 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/108460a3-822d-405c-a0fa-cdd12ea4123f-audit-dir\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:14.999688 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-serving-cert\") pod \"route-controller-manager-6576b87f9c-r5m9x\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:14.999707 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-image-import-ca\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:14.999734 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/108460a3-822d-405c-a0fa-cdd12ea4123f-encryption-config\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:14.999761 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmqdr\" (UniqueName: \"kubernetes.io/projected/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-kube-api-access-fmqdr\") pod \"route-controller-manager-6576b87f9c-r5m9x\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:14.999796 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvpnk\" (UniqueName: \"kubernetes.io/projected/108460a3-822d-405c-a0fa-cdd12ea4123f-kube-api-access-fvpnk\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:14.999825 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xhc46\" (UID: \"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.000489 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.000863 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.002327 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.004156 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.005083 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.005409 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.006448 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.007716 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.009196 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.010539 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.011172 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.011537 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.013170 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.018255 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.018554 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.019063 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.019428 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.019727 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-2l78j"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.019852 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.020134 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2l78j"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.020392 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.020545 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.020679 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.020821 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.022170 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.022431 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.024502 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.026633 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.027496 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.030575 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.031018 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s8pqg"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.031335 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.031540 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.045310 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-xpkxl"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.045613 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.045948 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6kszs"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.046492 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.046955 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.047654 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xpkxl"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.047935 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-6kszs"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.048374 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vn8zr"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.048974 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-vn8zr"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.049146 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pj7sl"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.049727 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.066380 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-dlt29"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.068373 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.069618 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlt29"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.070117 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.070519 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.071944 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.074269 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.075757 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.077754 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.080937 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.081128 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xhc46"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.086077 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.088959 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rzprv"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.089307 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.090618 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nd4pl"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.090781 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.091023 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.093241 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.093843 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.094707 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.095913 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.096862 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.098343 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.099850 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn"]
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100227 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36b0f3df-e65a-41d3-b718-916bd868f437-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100257 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-oauth-config\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100292 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100317 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b75b5dd-f846-44d1-b751-5d8241200a89-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100342 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfvbn\" (UniqueName: \"kubernetes.io/projected/614c35d6-ed0a-4b6d-9241-6df532fa9528-kube-api-access-sfvbn\") pod \"cluster-samples-operator-665b6dd947-bscpb\" (UID: \"614c35d6-ed0a-4b6d-9241-6df532fa9528\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100367 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea3e8958-c4f8-41bc-b5ca-6be701416ea7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ml4kf\" (UID: \"ea3e8958-c4f8-41bc-b5ca-6be701416ea7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100391 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2f7z\" (UniqueName: \"kubernetes.io/projected/a7012682-3c72-4541-ae5b-5c1522508f39-kube-api-access-l2f7z\") pod \"openshift-controller-manager-operator-756b6f6bc6-99f2s\" (UID: \"a7012682-3c72-4541-ae5b-5c1522508f39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100414 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms66w\" (UniqueName: \"kubernetes.io/projected/b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af-kube-api-access-ms66w\") pod \"openshift-config-operator-7777fb866f-x2n4v\" (UID: \"b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100440 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a-config\") pod \"machine-api-operator-5694c8668f-xhc46\" (UID: \"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100465 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1b75b5dd-f846-44d1-b751-5d8241200a89-audit-dir\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100490 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7012682-3c72-4541-ae5b-5c1522508f39-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-99f2s\" (UID: \"a7012682-3c72-4541-ae5b-5c1522508f39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100511 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p82sc\" (UniqueName: \"kubernetes.io/projected/3d20f3c1-8995-4eb3-9c83-3219e7ad35ec-kube-api-access-p82sc\") pod \"openshift-apiserver-operator-796bbdcf4f-x8bfn\" (UID: \"3d20f3c1-8995-4eb3-9c83-3219e7ad35ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100536 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfrwc\" (UniqueName: \"kubernetes.io/projected/426a3518-6fb9-4c1a-ab27-5a6c6222cd2d-kube-api-access-mfrwc\") pod \"ingress-operator-5b745b69d9-5mjq6\" (UID: \"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100559 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f225855-c898-4519-96fb-c0556fb46513-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qmgss\" (UID: \"2f225855-c898-4519-96fb-c0556fb46513\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100581 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn8jh\" (UniqueName: \"kubernetes.io/projected/57535671-8438-44b0-95f3-24679160fb8d-kube-api-access-vn8jh\") pod \"downloads-7954f5f757-2l78j\" (UID: \"57535671-8438-44b0-95f3-24679160fb8d\") " pod="openshift-console/downloads-7954f5f757-2l78j"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100603 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/108460a3-822d-405c-a0fa-cdd12ea4123f-serving-cert\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100625 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-config\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100646 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/108460a3-822d-405c-a0fa-cdd12ea4123f-audit-dir\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100671 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-serving-cert\") pod \"route-controller-manager-6576b87f9c-r5m9x\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100695 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b0f3df-e65a-41d3-b718-916bd868f437-config\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100722 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f869ac-7928-4749-9ba7-04ec01b48bc0-config\") pod \"console-operator-58897d9998-g9vhm\" (UID: \"f6f869ac-7928-4749-9ba7-04ec01b48bc0\") " pod="openshift-console-operator/console-operator-58897d9998-g9vhm"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100748 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100771 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mb5p8\" (UID: \"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100795 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-image-import-ca\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100819 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mb5p8\" (UID: \"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100856 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/108460a3-822d-405c-a0fa-cdd12ea4123f-encryption-config\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100884 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmqdr\" (UniqueName: \"kubernetes.io/projected/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-kube-api-access-fmqdr\") pod \"route-controller-manager-6576b87f9c-r5m9x\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100907 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea3e8958-c4f8-41bc-b5ca-6be701416ea7-config\") pod \"kube-apiserver-operator-766d6c64bb-ml4kf\" (UID: \"ea3e8958-c4f8-41bc-b5ca-6be701416ea7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100929 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f111a8c3-da3b-48f7-aad2-693c62b93659-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-msvjw\" (UID: \"f111a8c3-da3b-48f7-aad2-693c62b93659\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100953 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/426a3518-6fb9-4c1a-ab27-5a6c6222cd2d-trusted-ca\") pod \"ingress-operator-5b745b69d9-5mjq6\" (UID: \"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100975 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f111a8c3-da3b-48f7-aad2-693c62b93659-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-msvjw\" (UID: \"f111a8c3-da3b-48f7-aad2-693c62b93659\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100997 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101020 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czsfh\" (UniqueName: \"kubernetes.io/projected/36b0f3df-e65a-41d3-b718-916bd868f437-kube-api-access-czsfh\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101042 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b0f3df-e65a-41d3-b718-916bd868f437-serving-cert\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101064 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/426a3518-6fb9-4c1a-ab27-5a6c6222cd2d-metrics-tls\") pod \"ingress-operator-5b745b69d9-5mjq6\" (UID: \"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101086 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnfcn\" (UniqueName: \"kubernetes.io/projected/cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c-kube-api-access-rnfcn\") pod \"cluster-image-registry-operator-dc59b4c8b-mb5p8\" (UID: \"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101106 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f225855-c898-4519-96fb-c0556fb46513-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qmgss\" (UID: \"2f225855-c898-4519-96fb-c0556fb46513\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101129 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvpnk\" (UniqueName: 
\"kubernetes.io/projected/108460a3-822d-405c-a0fa-cdd12ea4123f-kube-api-access-fvpnk\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101154 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e17125aa-eb99-4bad-a99d-44b86be4f09d-audit-dir\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101208 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101234 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101255 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-config\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 
22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101280 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-trusted-ca-bundle\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101320 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f111a8c3-da3b-48f7-aad2-693c62b93659-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-msvjw\" (UID: \"f111a8c3-da3b-48f7-aad2-693c62b93659\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101345 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101374 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xhc46\" (UID: \"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101396 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/36b0f3df-e65a-41d3-b718-916bd868f437-service-ca-bundle\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101419 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-config\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101442 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101474 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d457a27-1fbc-43fd-81b8-3d0b1e495f0e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-j8jkh\" (UID: \"1d457a27-1fbc-43fd-81b8-3d0b1e495f0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101489 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lvnpc"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101515 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101541 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjx6p\" (UniqueName: \"kubernetes.io/projected/e17125aa-eb99-4bad-a99d-44b86be4f09d-kube-api-access-kjx6p\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101564 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/108460a3-822d-405c-a0fa-cdd12ea4123f-node-pullsecrets\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101587 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-audit\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101608 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b75b5dd-f846-44d1-b751-5d8241200a89-serving-cert\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101629 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dmmw\" (UniqueName: \"kubernetes.io/projected/1b75b5dd-f846-44d1-b751-5d8241200a89-kube-api-access-8dmmw\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101644 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101651 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/426a3518-6fb9-4c1a-ab27-5a6c6222cd2d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5mjq6\" (UID: \"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101673 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f225855-c898-4519-96fb-c0556fb46513-config\") pod \"kube-controller-manager-operator-78b949d7b-qmgss\" (UID: \"2f225855-c898-4519-96fb-c0556fb46513\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101695 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea3e8958-c4f8-41bc-b5ca-6be701416ea7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ml4kf\" (UID: 
\"ea3e8958-c4f8-41bc-b5ca-6be701416ea7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.102044 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1b75b5dd-f846-44d1-b751-5d8241200a89-audit-dir\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.102382 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/108460a3-822d-405c-a0fa-cdd12ea4123f-audit-dir\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.102470 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b75b5dd-f846-44d1-b751-5d8241200a89-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.102790 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/108460a3-822d-405c-a0fa-cdd12ea4123f-node-pullsecrets\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.103851 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-config\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " 
pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104077 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a-config\") pod \"machine-api-operator-5694c8668f-xhc46\" (UID: \"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104342 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-audit\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104517 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104555 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-oauth-serving-cert\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104583 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwn7v\" (UniqueName: \"kubernetes.io/projected/8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a-kube-api-access-hwn7v\") pod 
\"machine-api-operator-5694c8668f-xhc46\" (UID: \"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104608 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-serving-cert\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104631 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-etcd-service-ca\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104636 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-image-import-ca\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104655 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/49db6ef3-ecbb-44ba-90bd-6b4fad356374-auth-proxy-config\") pod \"machine-approver-56656f9798-h84t5\" (UID: \"49db6ef3-ecbb-44ba-90bd-6b4fad356374\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104682 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4f59h\" (UniqueName: \"kubernetes.io/projected/1d457a27-1fbc-43fd-81b8-3d0b1e495f0e-kube-api-access-4f59h\") pod \"kube-storage-version-migrator-operator-b67b599dd-j8jkh\" (UID: \"1d457a27-1fbc-43fd-81b8-3d0b1e495f0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104713 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbn82\" (UniqueName: \"kubernetes.io/projected/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-kube-api-access-mbn82\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104736 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104768 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d20f3c1-8995-4eb3-9c83-3219e7ad35ec-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-x8bfn\" (UID: \"3d20f3c1-8995-4eb3-9c83-3219e7ad35ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104797 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-config\") pod \"route-controller-manager-6576b87f9c-r5m9x\" (UID: 
\"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104815 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49db6ef3-ecbb-44ba-90bd-6b4fad356374-config\") pod \"machine-approver-56656f9798-h84t5\" (UID: \"49db6ef3-ecbb-44ba-90bd-6b4fad356374\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104856 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58669a0f-eecb-49dd-9637-af4dc30cd20d-serving-cert\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104874 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/614c35d6-ed0a-4b6d-9241-6df532fa9528-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bscpb\" (UID: \"614c35d6-ed0a-4b6d-9241-6df532fa9528\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104896 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d457a27-1fbc-43fd-81b8-3d0b1e495f0e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-j8jkh\" (UID: \"1d457a27-1fbc-43fd-81b8-3d0b1e495f0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104914 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-serving-cert\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104959 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af-serving-cert\") pod \"openshift-config-operator-7777fb866f-x2n4v\" (UID: \"b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104974 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-audit-policies\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105015 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105030 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-error\") pod 
\"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105050 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-etcd-serving-ca\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105102 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-service-ca\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105171 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1b75b5dd-f846-44d1-b751-5d8241200a89-audit-policies\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105263 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1b75b5dd-f846-44d1-b751-5d8241200a89-etcd-client\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105280 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-config\") 
pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105322 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-etcd-ca\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105340 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wtkq\" (UniqueName: \"kubernetes.io/projected/49db6ef3-ecbb-44ba-90bd-6b4fad356374-kube-api-access-9wtkq\") pod \"machine-approver-56656f9798-h84t5\" (UID: \"49db6ef3-ecbb-44ba-90bd-6b4fad356374\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105387 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-client-ca\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105412 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1b75b5dd-f846-44d1-b751-5d8241200a89-encryption-config\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105429 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/49db6ef3-ecbb-44ba-90bd-6b4fad356374-auth-proxy-config\") pod \"machine-approver-56656f9798-h84t5\" (UID: \"49db6ef3-ecbb-44ba-90bd-6b4fad356374\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105447 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d20f3c1-8995-4eb3-9c83-3219e7ad35ec-config\") pod \"openshift-apiserver-operator-796bbdcf4f-x8bfn\" (UID: \"3d20f3c1-8995-4eb3-9c83-3219e7ad35ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105472 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a-images\") pod \"machine-api-operator-5694c8668f-xhc46\" (UID: \"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105549 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1b75b5dd-f846-44d1-b751-5d8241200a89-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105566 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f869ac-7928-4749-9ba7-04ec01b48bc0-serving-cert\") pod \"console-operator-58897d9998-g9vhm\" (UID: \"f6f869ac-7928-4749-9ba7-04ec01b48bc0\") " pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105583 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqq8q\" (UniqueName: \"kubernetes.io/projected/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-kube-api-access-mqq8q\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105598 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7012682-3c72-4541-ae5b-5c1522508f39-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-99f2s\" (UID: \"a7012682-3c72-4541-ae5b-5c1522508f39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105614 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x2n4v\" (UID: \"b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105638 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/49db6ef3-ecbb-44ba-90bd-6b4fad356374-machine-approver-tls\") pod \"machine-approver-56656f9798-h84t5\" (UID: \"49db6ef3-ecbb-44ba-90bd-6b4fad356374\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105655 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105672 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p74mk\" (UniqueName: \"kubernetes.io/projected/58669a0f-eecb-49dd-9637-af4dc30cd20d-kube-api-access-p74mk\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105686 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f6f869ac-7928-4749-9ba7-04ec01b48bc0-trusted-ca\") pod \"console-operator-58897d9998-g9vhm\" (UID: \"f6f869ac-7928-4749-9ba7-04ec01b48bc0\") " pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105703 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72nth\" (UniqueName: \"kubernetes.io/projected/f6f869ac-7928-4749-9ba7-04ec01b48bc0-kube-api-access-72nth\") pod \"console-operator-58897d9998-g9vhm\" (UID: \"f6f869ac-7928-4749-9ba7-04ec01b48bc0\") " pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105717 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-etcd-client\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 
22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105733 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/108460a3-822d-405c-a0fa-cdd12ea4123f-etcd-client\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105752 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-client-ca\") pod \"route-controller-manager-6576b87f9c-r5m9x\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105768 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mb5p8\" (UID: \"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.106137 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-config\") pod \"route-controller-manager-6576b87f9c-r5m9x\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.106475 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a-images\") pod \"machine-api-operator-5694c8668f-xhc46\" (UID: 
\"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.106575 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-etcd-serving-ca\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.106626 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49db6ef3-ecbb-44ba-90bd-6b4fad356374-config\") pod \"machine-approver-56656f9798-h84t5\" (UID: \"49db6ef3-ecbb-44ba-90bd-6b4fad356374\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.107407 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1b75b5dd-f846-44d1-b751-5d8241200a89-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.108232 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/108460a3-822d-405c-a0fa-cdd12ea4123f-serving-cert\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.109520 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1b75b5dd-f846-44d1-b751-5d8241200a89-audit-policies\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: 
\"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.109769 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1b75b5dd-f846-44d1-b751-5d8241200a89-etcd-client\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.109834 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1b75b5dd-f846-44d1-b751-5d8241200a89-encryption-config\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.110305 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/49db6ef3-ecbb-44ba-90bd-6b4fad356374-machine-approver-tls\") pod \"machine-approver-56656f9798-h84t5\" (UID: \"49db6ef3-ecbb-44ba-90bd-6b4fad356374\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.110355 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b75b5dd-f846-44d1-b751-5d8241200a89-serving-cert\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.110616 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a-machine-api-operator-tls\") pod 
\"machine-api-operator-5694c8668f-xhc46\" (UID: \"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.110787 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/108460a3-822d-405c-a0fa-cdd12ea4123f-etcd-client\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.111149 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-client-ca\") pod \"route-controller-manager-6576b87f9c-r5m9x\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.111271 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-jv28b"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.111291 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/108460a3-822d-405c-a0fa-cdd12ea4123f-encryption-config\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.112610 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-jv28b" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.113089 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.115139 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-g9vhm"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.116811 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2fq8c"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.117243 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-serving-cert\") pod \"route-controller-manager-6576b87f9c-r5m9x\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.118078 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-57ccj"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.119251 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.119459 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-65h9z"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.120084 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-65h9z" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.120731 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.123624 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.125881 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.127172 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.128501 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2jn5q"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.129811 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2l78j"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.132732 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pj7sl"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.133120 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.135274 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.137209 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.138829 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gxxfj"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.139836 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.141087 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-dlt29"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.143265 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.145442 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6kszs"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.146693 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.148523 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.150030 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s8pqg"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.153606 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.155157 4793 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-ingress-canary/ingress-canary-rz5xt"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.156172 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rz5xt" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.156456 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5kfbz"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.157646 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.158030 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rzprv"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.159314 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.161216 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vn8zr"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.162381 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.163504 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.164892 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.166198 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk"] Jan 26 22:42:15 crc 
kubenswrapper[4793]: I0126 22:42:15.167314 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rz5xt"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.168757 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.178988 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.180935 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jv28b"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.183609 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5kfbz"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.199945 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.206511 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-config\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.206549 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-etcd-ca\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.206617 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-client-ca\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.206762 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81bdaed4-2088-404f-a937-9a682635b5ab-serving-cert\") pod \"service-ca-operator-777779d784-bcdkj\" (UID: \"81bdaed4-2088-404f-a937-9a682635b5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.207317 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-config\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.207361 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d20f3c1-8995-4eb3-9c83-3219e7ad35ec-config\") pod \"openshift-apiserver-operator-796bbdcf4f-x8bfn\" (UID: \"3d20f3c1-8995-4eb3-9c83-3219e7ad35ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.207607 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f869ac-7928-4749-9ba7-04ec01b48bc0-serving-cert\") pod \"console-operator-58897d9998-g9vhm\" (UID: \"f6f869ac-7928-4749-9ba7-04ec01b48bc0\") " pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:15 crc 
kubenswrapper[4793]: I0126 22:42:15.207764 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqq8q\" (UniqueName: \"kubernetes.io/projected/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-kube-api-access-mqq8q\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.207807 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7012682-3c72-4541-ae5b-5c1522508f39-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-99f2s\" (UID: \"a7012682-3c72-4541-ae5b-5c1522508f39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.207830 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x2n4v\" (UID: \"b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.207940 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4f027876-8332-446c-9ea2-29d38ba7fcfc-profile-collector-cert\") pod \"catalog-operator-68c6474976-r4dq5\" (UID: \"4f027876-8332-446c-9ea2-29d38ba7fcfc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.207970 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.207992 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p74mk\" (UniqueName: \"kubernetes.io/projected/58669a0f-eecb-49dd-9637-af4dc30cd20d-kube-api-access-p74mk\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208040 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f6f869ac-7928-4749-9ba7-04ec01b48bc0-trusted-ca\") pod \"console-operator-58897d9998-g9vhm\" (UID: \"f6f869ac-7928-4749-9ba7-04ec01b48bc0\") " pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208057 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72nth\" (UniqueName: \"kubernetes.io/projected/f6f869ac-7928-4749-9ba7-04ec01b48bc0-kube-api-access-72nth\") pod \"console-operator-58897d9998-g9vhm\" (UID: \"f6f869ac-7928-4749-9ba7-04ec01b48bc0\") " pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208077 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-etcd-client\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208085 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-etcd-ca\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208096 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mb5p8\" (UID: \"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208142 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36b0f3df-e65a-41d3-b718-916bd868f437-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208159 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-oauth-config\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208202 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfvbn\" (UniqueName: \"kubernetes.io/projected/614c35d6-ed0a-4b6d-9241-6df532fa9528-kube-api-access-sfvbn\") pod \"cluster-samples-operator-665b6dd947-bscpb\" (UID: \"614c35d6-ed0a-4b6d-9241-6df532fa9528\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208223 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea3e8958-c4f8-41bc-b5ca-6be701416ea7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ml4kf\" (UID: \"ea3e8958-c4f8-41bc-b5ca-6be701416ea7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208437 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2f7z\" (UniqueName: \"kubernetes.io/projected/a7012682-3c72-4541-ae5b-5c1522508f39-kube-api-access-l2f7z\") pod \"openshift-controller-manager-operator-756b6f6bc6-99f2s\" (UID: \"a7012682-3c72-4541-ae5b-5c1522508f39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208455 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms66w\" (UniqueName: \"kubernetes.io/projected/b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af-kube-api-access-ms66w\") pod \"openshift-config-operator-7777fb866f-x2n4v\" (UID: \"b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208480 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7012682-3c72-4541-ae5b-5c1522508f39-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-99f2s\" (UID: \"a7012682-3c72-4541-ae5b-5c1522508f39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208497 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-p82sc\" (UniqueName: \"kubernetes.io/projected/3d20f3c1-8995-4eb3-9c83-3219e7ad35ec-kube-api-access-p82sc\") pod \"openshift-apiserver-operator-796bbdcf4f-x8bfn\" (UID: \"3d20f3c1-8995-4eb3-9c83-3219e7ad35ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208513 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfrwc\" (UniqueName: \"kubernetes.io/projected/426a3518-6fb9-4c1a-ab27-5a6c6222cd2d-kube-api-access-mfrwc\") pod \"ingress-operator-5b745b69d9-5mjq6\" (UID: \"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208522 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d20f3c1-8995-4eb3-9c83-3219e7ad35ec-config\") pod \"openshift-apiserver-operator-796bbdcf4f-x8bfn\" (UID: \"3d20f3c1-8995-4eb3-9c83-3219e7ad35ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208531 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f225855-c898-4519-96fb-c0556fb46513-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qmgss\" (UID: \"2f225855-c898-4519-96fb-c0556fb46513\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208615 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn8jh\" (UniqueName: \"kubernetes.io/projected/57535671-8438-44b0-95f3-24679160fb8d-kube-api-access-vn8jh\") pod \"downloads-7954f5f757-2l78j\" (UID: 
\"57535671-8438-44b0-95f3-24679160fb8d\") " pod="openshift-console/downloads-7954f5f757-2l78j" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208639 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c808b47-7d86-4456-ae60-cb83d2a58262-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nrmwq\" (UID: \"1c808b47-7d86-4456-ae60-cb83d2a58262\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208694 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b0f3df-e65a-41d3-b718-916bd868f437-config\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208711 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f869ac-7928-4749-9ba7-04ec01b48bc0-config\") pod \"console-operator-58897d9998-g9vhm\" (UID: \"f6f869ac-7928-4749-9ba7-04ec01b48bc0\") " pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208735 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208757 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mb5p8\" (UID: \"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208782 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mb5p8\" (UID: \"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208808 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81bdaed4-2088-404f-a937-9a682635b5ab-config\") pod \"service-ca-operator-777779d784-bcdkj\" (UID: \"81bdaed4-2088-404f-a937-9a682635b5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208849 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea3e8958-c4f8-41bc-b5ca-6be701416ea7-config\") pod \"kube-apiserver-operator-766d6c64bb-ml4kf\" (UID: \"ea3e8958-c4f8-41bc-b5ca-6be701416ea7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208873 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f111a8c3-da3b-48f7-aad2-693c62b93659-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-msvjw\" (UID: \"f111a8c3-da3b-48f7-aad2-693c62b93659\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208895 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czsfh\" (UniqueName: \"kubernetes.io/projected/36b0f3df-e65a-41d3-b718-916bd868f437-kube-api-access-czsfh\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208939 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/426a3518-6fb9-4c1a-ab27-5a6c6222cd2d-trusted-ca\") pod \"ingress-operator-5b745b69d9-5mjq6\" (UID: \"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208956 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f111a8c3-da3b-48f7-aad2-693c62b93659-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-msvjw\" (UID: \"f111a8c3-da3b-48f7-aad2-693c62b93659\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208975 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209001 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b0f3df-e65a-41d3-b718-916bd868f437-serving-cert\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209092 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/426a3518-6fb9-4c1a-ab27-5a6c6222cd2d-metrics-tls\") pod \"ingress-operator-5b745b69d9-5mjq6\" (UID: \"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209117 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnfcn\" (UniqueName: \"kubernetes.io/projected/cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c-kube-api-access-rnfcn\") pod \"cluster-image-registry-operator-dc59b4c8b-mb5p8\" (UID: \"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209153 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e17125aa-eb99-4bad-a99d-44b86be4f09d-audit-dir\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209171 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f225855-c898-4519-96fb-c0556fb46513-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qmgss\" (UID: \"2f225855-c898-4519-96fb-c0556fb46513\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209208 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs6g7\" (UniqueName: \"kubernetes.io/projected/1c808b47-7d86-4456-ae60-cb83d2a58262-kube-api-access-xs6g7\") pod \"machine-config-operator-74547568cd-nrmwq\" (UID: \"1c808b47-7d86-4456-ae60-cb83d2a58262\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209241 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209262 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209314 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-config\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209346 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-trusted-ca-bundle\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209384 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36b0f3df-e65a-41d3-b718-916bd868f437-service-ca-bundle\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209386 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209410 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f111a8c3-da3b-48f7-aad2-693c62b93659-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-msvjw\" (UID: \"f111a8c3-da3b-48f7-aad2-693c62b93659\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209517 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 
22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209564 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-config\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209594 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209619 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d457a27-1fbc-43fd-81b8-3d0b1e495f0e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-j8jkh\" (UID: \"1d457a27-1fbc-43fd-81b8-3d0b1e495f0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209648 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209764 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjx6p\" (UniqueName: 
\"kubernetes.io/projected/e17125aa-eb99-4bad-a99d-44b86be4f09d-kube-api-access-kjx6p\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209761 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7012682-3c72-4541-ae5b-5c1522508f39-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-99f2s\" (UID: \"a7012682-3c72-4541-ae5b-5c1522508f39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209812 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/426a3518-6fb9-4c1a-ab27-5a6c6222cd2d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5mjq6\" (UID: \"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209876 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f225855-c898-4519-96fb-c0556fb46513-config\") pod \"kube-controller-manager-operator-78b949d7b-qmgss\" (UID: \"2f225855-c898-4519-96fb-c0556fb46513\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209902 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqs7g\" (UniqueName: \"kubernetes.io/projected/81bdaed4-2088-404f-a937-9a682635b5ab-kube-api-access-kqs7g\") pod \"service-ca-operator-777779d784-bcdkj\" (UID: \"81bdaed4-2088-404f-a937-9a682635b5ab\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210081 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b0f3df-e65a-41d3-b718-916bd868f437-config\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210118 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e17125aa-eb99-4bad-a99d-44b86be4f09d-audit-dir\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210477 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-client-ca\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209942 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea3e8958-c4f8-41bc-b5ca-6be701416ea7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ml4kf\" (UID: \"ea3e8958-c4f8-41bc-b5ca-6be701416ea7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210631 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36b0f3df-e65a-41d3-b718-916bd868f437-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210725 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209766 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x2n4v\" (UID: \"b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210798 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-oauth-serving-cert\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210872 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-serving-cert\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210900 4793 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-etcd-service-ca\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210922 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1c808b47-7d86-4456-ae60-cb83d2a58262-images\") pod \"machine-config-operator-74547568cd-nrmwq\" (UID: \"1c808b47-7d86-4456-ae60-cb83d2a58262\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210946 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f59h\" (UniqueName: \"kubernetes.io/projected/1d457a27-1fbc-43fd-81b8-3d0b1e495f0e-kube-api-access-4f59h\") pod \"kube-storage-version-migrator-operator-b67b599dd-j8jkh\" (UID: \"1d457a27-1fbc-43fd-81b8-3d0b1e495f0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210965 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c808b47-7d86-4456-ae60-cb83d2a58262-proxy-tls\") pod \"machine-config-operator-74547568cd-nrmwq\" (UID: \"1c808b47-7d86-4456-ae60-cb83d2a58262\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210989 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbn82\" (UniqueName: \"kubernetes.io/projected/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-kube-api-access-mbn82\") pod \"etcd-operator-b45778765-2fq8c\" (UID: 
\"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211008 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211028 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d20f3c1-8995-4eb3-9c83-3219e7ad35ec-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-x8bfn\" (UID: \"3d20f3c1-8995-4eb3-9c83-3219e7ad35ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211048 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4f027876-8332-446c-9ea2-29d38ba7fcfc-srv-cert\") pod \"catalog-operator-68c6474976-r4dq5\" (UID: \"4f027876-8332-446c-9ea2-29d38ba7fcfc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211066 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq2lx\" (UniqueName: \"kubernetes.io/projected/4f027876-8332-446c-9ea2-29d38ba7fcfc-kube-api-access-pq2lx\") pod \"catalog-operator-68c6474976-r4dq5\" (UID: \"4f027876-8332-446c-9ea2-29d38ba7fcfc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211091 4793 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58669a0f-eecb-49dd-9637-af4dc30cd20d-serving-cert\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211110 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/614c35d6-ed0a-4b6d-9241-6df532fa9528-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bscpb\" (UID: \"614c35d6-ed0a-4b6d-9241-6df532fa9528\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211131 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-audit-policies\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211150 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211178 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211214 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mb5p8\" (UID: \"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210793 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f869ac-7928-4749-9ba7-04ec01b48bc0-config\") pod \"console-operator-58897d9998-g9vhm\" (UID: \"f6f869ac-7928-4749-9ba7-04ec01b48bc0\") " pod="openshift-console-operator/console-operator-58897d9998-g9vhm"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211214 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d457a27-1fbc-43fd-81b8-3d0b1e495f0e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-j8jkh\" (UID: \"1d457a27-1fbc-43fd-81b8-3d0b1e495f0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211313 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-serving-cert\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211331 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af-serving-cert\") pod \"openshift-config-operator-7777fb866f-x2n4v\" (UID: \"b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211380 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-service-ca\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.212084 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-config\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.212330 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f6f869ac-7928-4749-9ba7-04ec01b48bc0-trusted-ca\") pod \"console-operator-58897d9998-g9vhm\" (UID: \"f6f869ac-7928-4749-9ba7-04ec01b48bc0\") " pod="openshift-console-operator/console-operator-58897d9998-g9vhm"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.212343 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-oauth-serving-cert\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.212514 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.212578 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-config\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.212900 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-oauth-config\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.213489 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7012682-3c72-4541-ae5b-5c1522508f39-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-99f2s\" (UID: \"a7012682-3c72-4541-ae5b-5c1522508f39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.214048 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36b0f3df-e65a-41d3-b718-916bd868f437-service-ca-bundle\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.214102 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.214504 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-service-ca\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.214860 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.215517 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mb5p8\" (UID: \"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.216014 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.216040 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-audit-policies\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.216177 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-etcd-service-ca\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.216249 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.216343 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-trusted-ca-bundle\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.216537 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b0f3df-e65a-41d3-b718-916bd868f437-serving-cert\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.216683 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/614c35d6-ed0a-4b6d-9241-6df532fa9528-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bscpb\" (UID: \"614c35d6-ed0a-4b6d-9241-6df532fa9528\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.216691 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.217585 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-etcd-client\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.218226 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f869ac-7928-4749-9ba7-04ec01b48bc0-serving-cert\") pod \"console-operator-58897d9998-g9vhm\" (UID: \"f6f869ac-7928-4749-9ba7-04ec01b48bc0\") " pod="openshift-console-operator/console-operator-58897d9998-g9vhm"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.218500 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f225855-c898-4519-96fb-c0556fb46513-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qmgss\" (UID: \"2f225855-c898-4519-96fb-c0556fb46513\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.218833 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-serving-cert\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.219095 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.219446 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.219564 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d20f3c1-8995-4eb3-9c83-3219e7ad35ec-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-x8bfn\" (UID: \"3d20f3c1-8995-4eb3-9c83-3219e7ad35ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.219738 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.219854 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58669a0f-eecb-49dd-9637-af4dc30cd20d-serving-cert\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.219938 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af-serving-cert\") pod \"openshift-config-operator-7777fb866f-x2n4v\" (UID: \"b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.220323 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.220693 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.221234 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.221884 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f225855-c898-4519-96fb-c0556fb46513-config\") pod \"kube-controller-manager-operator-78b949d7b-qmgss\" (UID: \"2f225855-c898-4519-96fb-c0556fb46513\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.227856 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-serving-cert\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.239205 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.260118 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.279624 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.299024 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.313534 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c808b47-7d86-4456-ae60-cb83d2a58262-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nrmwq\" (UID: \"1c808b47-7d86-4456-ae60-cb83d2a58262\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.313943 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81bdaed4-2088-404f-a937-9a682635b5ab-config\") pod \"service-ca-operator-777779d784-bcdkj\" (UID: \"81bdaed4-2088-404f-a937-9a682635b5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.314083 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs6g7\" (UniqueName: \"kubernetes.io/projected/1c808b47-7d86-4456-ae60-cb83d2a58262-kube-api-access-xs6g7\") pod \"machine-config-operator-74547568cd-nrmwq\" (UID: \"1c808b47-7d86-4456-ae60-cb83d2a58262\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.314248 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqs7g\" (UniqueName: \"kubernetes.io/projected/81bdaed4-2088-404f-a937-9a682635b5ab-kube-api-access-kqs7g\") pod \"service-ca-operator-777779d784-bcdkj\" (UID: \"81bdaed4-2088-404f-a937-9a682635b5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.314150 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c808b47-7d86-4456-ae60-cb83d2a58262-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nrmwq\" (UID: \"1c808b47-7d86-4456-ae60-cb83d2a58262\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.314399 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1c808b47-7d86-4456-ae60-cb83d2a58262-images\") pod \"machine-config-operator-74547568cd-nrmwq\" (UID: \"1c808b47-7d86-4456-ae60-cb83d2a58262\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.314519 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c808b47-7d86-4456-ae60-cb83d2a58262-proxy-tls\") pod \"machine-config-operator-74547568cd-nrmwq\" (UID: \"1c808b47-7d86-4456-ae60-cb83d2a58262\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.314617 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4f027876-8332-446c-9ea2-29d38ba7fcfc-srv-cert\") pod \"catalog-operator-68c6474976-r4dq5\" (UID: \"4f027876-8332-446c-9ea2-29d38ba7fcfc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.314719 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq2lx\" (UniqueName: \"kubernetes.io/projected/4f027876-8332-446c-9ea2-29d38ba7fcfc-kube-api-access-pq2lx\") pod \"catalog-operator-68c6474976-r4dq5\" (UID: \"4f027876-8332-446c-9ea2-29d38ba7fcfc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.314852 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81bdaed4-2088-404f-a937-9a682635b5ab-serving-cert\") pod \"service-ca-operator-777779d784-bcdkj\" (UID: \"81bdaed4-2088-404f-a937-9a682635b5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.314948 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4f027876-8332-446c-9ea2-29d38ba7fcfc-profile-collector-cert\") pod \"catalog-operator-68c6474976-r4dq5\" (UID: \"4f027876-8332-446c-9ea2-29d38ba7fcfc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.319659 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.340386 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.350155 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.359853 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.369709 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/426a3518-6fb9-4c1a-ab27-5a6c6222cd2d-metrics-tls\") pod \"ingress-operator-5b745b69d9-5mjq6\" (UID: \"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.392604 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.400414 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.402457 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/426a3518-6fb9-4c1a-ab27-5a6c6222cd2d-trusted-ca\") pod \"ingress-operator-5b745b69d9-5mjq6\" (UID: \"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.419577 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.440348 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.445943 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea3e8958-c4f8-41bc-b5ca-6be701416ea7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ml4kf\" (UID: \"ea3e8958-c4f8-41bc-b5ca-6be701416ea7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.459725 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.461180 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea3e8958-c4f8-41bc-b5ca-6be701416ea7-config\") pod \"kube-apiserver-operator-766d6c64bb-ml4kf\" (UID: \"ea3e8958-c4f8-41bc-b5ca-6be701416ea7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.480510 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.498857 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.508378 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d457a27-1fbc-43fd-81b8-3d0b1e495f0e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-j8jkh\" (UID: \"1d457a27-1fbc-43fd-81b8-3d0b1e495f0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.519617 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.522762 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d457a27-1fbc-43fd-81b8-3d0b1e495f0e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-j8jkh\" (UID: \"1d457a27-1fbc-43fd-81b8-3d0b1e495f0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.539960 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.559422 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.563407 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f111a8c3-da3b-48f7-aad2-693c62b93659-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-msvjw\" (UID: \"f111a8c3-da3b-48f7-aad2-693c62b93659\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.580391 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.602818 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.611361 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f111a8c3-da3b-48f7-aad2-693c62b93659-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-msvjw\" (UID: \"f111a8c3-da3b-48f7-aad2-693c62b93659\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.619604 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.626127 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1c808b47-7d86-4456-ae60-cb83d2a58262-images\") pod \"machine-config-operator-74547568cd-nrmwq\" (UID: \"1c808b47-7d86-4456-ae60-cb83d2a58262\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.639281 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.659470 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.670929 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c808b47-7d86-4456-ae60-cb83d2a58262-proxy-tls\") pod \"machine-config-operator-74547568cd-nrmwq\" (UID: \"1c808b47-7d86-4456-ae60-cb83d2a58262\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.680677 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.700875 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.721024 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.740127 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.760723 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.780322 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.800356 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.820671 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.838972 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.859272 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.881179 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.900233 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.919486 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.941627 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.960719 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.980427 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.002723 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.019453 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.039501 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.055409 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4f027876-8332-446c-9ea2-29d38ba7fcfc-profile-collector-cert\") pod \"catalog-operator-68c6474976-r4dq5\" (UID: \"4f027876-8332-446c-9ea2-29d38ba7fcfc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5"
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.057730 4793 request.go:700] Waited for 1.008629303s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.060911 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.080632 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.099952 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.120279 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.142004 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.161289 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.181225 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.200868 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.220459 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.240089 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.260396 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.279953 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.301173 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 26 22:42:16 crc kubenswrapper[4793]: E0126 22:42:16.314811 4793 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition
Jan 26 22:42:16 crc kubenswrapper[4793]: E0126 22:42:16.314905 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/81bdaed4-2088-404f-a937-9a682635b5ab-config podName:81bdaed4-2088-404f-a937-9a682635b5ab nodeName:}" failed. No retries permitted until 2026-01-26 22:42:16.814886213 +0000 UTC m=+151.803657725 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/81bdaed4-2088-404f-a937-9a682635b5ab-config") pod "service-ca-operator-777779d784-bcdkj" (UID: "81bdaed4-2088-404f-a937-9a682635b5ab") : failed to sync configmap cache: timed out waiting for the condition
Jan 26 22:42:16 crc kubenswrapper[4793]: E0126 22:42:16.314994 4793 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 26 22:42:16 crc kubenswrapper[4793]: E0126 22:42:16.315059 4793 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 26 22:42:16 crc kubenswrapper[4793]: E0126 22:42:16.315093 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f027876-8332-446c-9ea2-29d38ba7fcfc-srv-cert podName:4f027876-8332-446c-9ea2-29d38ba7fcfc nodeName:}" failed. No retries permitted until 2026-01-26 22:42:16.815060839 +0000 UTC m=+151.803832381 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/4f027876-8332-446c-9ea2-29d38ba7fcfc-srv-cert") pod "catalog-operator-68c6474976-r4dq5" (UID: "4f027876-8332-446c-9ea2-29d38ba7fcfc") : failed to sync secret cache: timed out waiting for the condition
Jan 26 22:42:16 crc kubenswrapper[4793]: E0126 22:42:16.315129 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81bdaed4-2088-404f-a937-9a682635b5ab-serving-cert podName:81bdaed4-2088-404f-a937-9a682635b5ab nodeName:}" failed. No retries permitted until 2026-01-26 22:42:16.81511071 +0000 UTC m=+151.803882252 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/81bdaed4-2088-404f-a937-9a682635b5ab-serving-cert") pod "service-ca-operator-777779d784-bcdkj" (UID: "81bdaed4-2088-404f-a937-9a682635b5ab") : failed to sync secret cache: timed out waiting for the condition
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.320841 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.340011 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.360805 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.379437 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.399776 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.419689 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.439333 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.462046 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 26 22:42:16 crc
kubenswrapper[4793]: I0126 22:42:16.480550 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.499763 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.519232 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.553984 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.560589 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.579843 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.599591 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.621152 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.671748 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvpnk\" (UniqueName: \"kubernetes.io/projected/108460a3-822d-405c-a0fa-cdd12ea4123f-kube-api-access-fvpnk\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.690680 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmqdr\" (UniqueName: \"kubernetes.io/projected/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-kube-api-access-fmqdr\") pod \"route-controller-manager-6576b87f9c-r5m9x\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.712459 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.718179 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dmmw\" (UniqueName: \"kubernetes.io/projected/1b75b5dd-f846-44d1-b751-5d8241200a89-kube-api-access-8dmmw\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.730958 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwn7v\" (UniqueName: \"kubernetes.io/projected/8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a-kube-api-access-hwn7v\") pod \"machine-api-operator-5694c8668f-xhc46\" (UID: \"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.741809 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.742566 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.751094 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wtkq\" (UniqueName: \"kubernetes.io/projected/49db6ef3-ecbb-44ba-90bd-6b4fad356374-kube-api-access-9wtkq\") pod \"machine-approver-56656f9798-h84t5\" (UID: \"49db6ef3-ecbb-44ba-90bd-6b4fad356374\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.760849 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.775663 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.780550 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.803780 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.808581 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.827183 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 22:42:16 crc kubenswrapper[4793]: W0126 22:42:16.829915 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49db6ef3_ecbb_44ba_90bd_6b4fad356374.slice/crio-714265be8142869c8bc0d2923ebd1e4de47646776e33e15513a03576fafc7bd1 WatchSource:0}: Error finding container 714265be8142869c8bc0d2923ebd1e4de47646776e33e15513a03576fafc7bd1: Status 404 returned error can't find the container with id 714265be8142869c8bc0d2923ebd1e4de47646776e33e15513a03576fafc7bd1 Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.839174 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.839297 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4f027876-8332-446c-9ea2-29d38ba7fcfc-srv-cert\") pod \"catalog-operator-68c6474976-r4dq5\" (UID: \"4f027876-8332-446c-9ea2-29d38ba7fcfc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.839457 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81bdaed4-2088-404f-a937-9a682635b5ab-serving-cert\") pod \"service-ca-operator-777779d784-bcdkj\" (UID: \"81bdaed4-2088-404f-a937-9a682635b5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.839846 4793 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81bdaed4-2088-404f-a937-9a682635b5ab-config\") pod \"service-ca-operator-777779d784-bcdkj\" (UID: \"81bdaed4-2088-404f-a937-9a682635b5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.840923 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.846068 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81bdaed4-2088-404f-a937-9a682635b5ab-config\") pod \"service-ca-operator-777779d784-bcdkj\" (UID: \"81bdaed4-2088-404f-a937-9a682635b5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.849225 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4f027876-8332-446c-9ea2-29d38ba7fcfc-srv-cert\") pod \"catalog-operator-68c6474976-r4dq5\" (UID: \"4f027876-8332-446c-9ea2-29d38ba7fcfc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.850283 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81bdaed4-2088-404f-a937-9a682635b5ab-serving-cert\") pod \"service-ca-operator-777779d784-bcdkj\" (UID: \"81bdaed4-2088-404f-a937-9a682635b5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.901822 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.924576 4793 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.938846 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.958615 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.980917 4793 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.986440 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nd4pl"] Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:16.999802 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.005323 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf"] Jan 26 22:42:17 crc kubenswrapper[4793]: W0126 22:42:17.016283 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b75b5dd_f846_44d1_b751_5d8241200a89.slice/crio-7cee3695f290c2b3f1586199616cb7e87d39ef0e39a5c1f2a9f0b8913294bf0a WatchSource:0}: Error finding container 7cee3695f290c2b3f1586199616cb7e87d39ef0e39a5c1f2a9f0b8913294bf0a: Status 404 returned error can't find the container with id 7cee3695f290c2b3f1586199616cb7e87d39ef0e39a5c1f2a9f0b8913294bf0a Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.019977 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.034149 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x"] Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.055894 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqq8q\" (UniqueName: \"kubernetes.io/projected/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-kube-api-access-mqq8q\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.072969 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p74mk\" (UniqueName: \"kubernetes.io/projected/58669a0f-eecb-49dd-9637-af4dc30cd20d-kube-api-access-p74mk\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.078177 4793 request.go:700] Waited for 1.869508847s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.084610 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xhc46"] Jan 26 22:42:17 crc kubenswrapper[4793]: W0126 22:42:17.093407 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b2fe8d2_137a_46a7_a57e_94a1c4d91f8a.slice/crio-493750e07675f90444f80ef2666e3951efa20c54ec7fe48dd7f4e35295754851 WatchSource:0}: Error finding container 493750e07675f90444f80ef2666e3951efa20c54ec7fe48dd7f4e35295754851: Status 404 returned error can't find the container with id 493750e07675f90444f80ef2666e3951efa20c54ec7fe48dd7f4e35295754851 Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 
22:42:17.096461 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfvbn\" (UniqueName: \"kubernetes.io/projected/614c35d6-ed0a-4b6d-9241-6df532fa9528-kube-api-access-sfvbn\") pod \"cluster-samples-operator-665b6dd947-bscpb\" (UID: \"614c35d6-ed0a-4b6d-9241-6df532fa9528\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.121023 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms66w\" (UniqueName: \"kubernetes.io/projected/b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af-kube-api-access-ms66w\") pod \"openshift-config-operator-7777fb866f-x2n4v\" (UID: \"b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.134099 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea3e8958-c4f8-41bc-b5ca-6be701416ea7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ml4kf\" (UID: \"ea3e8958-c4f8-41bc-b5ca-6be701416ea7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.152591 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfrwc\" (UniqueName: \"kubernetes.io/projected/426a3518-6fb9-4c1a-ab27-5a6c6222cd2d-kube-api-access-mfrwc\") pod \"ingress-operator-5b745b69d9-5mjq6\" (UID: \"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.176464 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.180473 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mb5p8\" (UID: \"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.194109 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f225855-c898-4519-96fb-c0556fb46513-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qmgss\" (UID: \"2f225855-c898-4519-96fb-c0556fb46513\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.199063 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.214080 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2f7z\" (UniqueName: \"kubernetes.io/projected/a7012682-3c72-4541-ae5b-5c1522508f39-kube-api-access-l2f7z\") pod \"openshift-controller-manager-operator-756b6f6bc6-99f2s\" (UID: \"a7012682-3c72-4541-ae5b-5c1522508f39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.238081 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p82sc\" (UniqueName: \"kubernetes.io/projected/3d20f3c1-8995-4eb3-9c83-3219e7ad35ec-kube-api-access-p82sc\") pod \"openshift-apiserver-operator-796bbdcf4f-x8bfn\" (UID: \"3d20f3c1-8995-4eb3-9c83-3219e7ad35ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.240800 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.255554 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.256179 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn8jh\" (UniqueName: \"kubernetes.io/projected/57535671-8438-44b0-95f3-24679160fb8d-kube-api-access-vn8jh\") pod \"downloads-7954f5f757-2l78j\" (UID: \"57535671-8438-44b0-95f3-24679160fb8d\") " pod="openshift-console/downloads-7954f5f757-2l78j" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.263716 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.270180 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.276795 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72nth\" (UniqueName: \"kubernetes.io/projected/f6f869ac-7928-4749-9ba7-04ec01b48bc0-kube-api-access-72nth\") pod \"console-operator-58897d9998-g9vhm\" (UID: \"f6f869ac-7928-4749-9ba7-04ec01b48bc0\") " pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.296109 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.299134 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnfcn\" (UniqueName: \"kubernetes.io/projected/cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c-kube-api-access-rnfcn\") pod \"cluster-image-registry-operator-dc59b4c8b-mb5p8\" (UID: \"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.304251 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-2l78j" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.315142 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f111a8c3-da3b-48f7-aad2-693c62b93659-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-msvjw\" (UID: \"f111a8c3-da3b-48f7-aad2-693c62b93659\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.376348 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.379156 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czsfh\" (UniqueName: \"kubernetes.io/projected/36b0f3df-e65a-41d3-b718-916bd868f437-kube-api-access-czsfh\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.385396 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.389577 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/426a3518-6fb9-4c1a-ab27-5a6c6222cd2d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5mjq6\" (UID: \"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.392341 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjx6p\" (UniqueName: \"kubernetes.io/projected/e17125aa-eb99-4bad-a99d-44b86be4f09d-kube-api-access-kjx6p\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.396104 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f59h\" (UniqueName: \"kubernetes.io/projected/1d457a27-1fbc-43fd-81b8-3d0b1e495f0e-kube-api-access-4f59h\") pod \"kube-storage-version-migrator-operator-b67b599dd-j8jkh\" (UID: \"1d457a27-1fbc-43fd-81b8-3d0b1e495f0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.416160 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbn82\" (UniqueName: \"kubernetes.io/projected/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-kube-api-access-mbn82\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.432868 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-2jn5q"] Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.435828 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs6g7\" (UniqueName: \"kubernetes.io/projected/1c808b47-7d86-4456-ae60-cb83d2a58262-kube-api-access-xs6g7\") pod \"machine-config-operator-74547568cd-nrmwq\" (UID: \"1c808b47-7d86-4456-ae60-cb83d2a58262\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.463153 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqs7g\" (UniqueName: \"kubernetes.io/projected/81bdaed4-2088-404f-a937-9a682635b5ab-kube-api-access-kqs7g\") pod \"service-ca-operator-777779d784-bcdkj\" (UID: \"81bdaed4-2088-404f-a937-9a682635b5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.474613 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq2lx\" (UniqueName: \"kubernetes.io/projected/4f027876-8332-446c-9ea2-29d38ba7fcfc-kube-api-access-pq2lx\") pod \"catalog-operator-68c6474976-r4dq5\" (UID: \"4f027876-8332-446c-9ea2-29d38ba7fcfc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.491223 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.521993 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb"] Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.524082 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.524991 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.542118 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn"] Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.546441 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.547929 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.575227 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6b5651da-f3c7-41fe-a7b5-c9a054827d3d-tmpfs\") pod \"packageserver-d55dfcdfc-cq8pk\" (UID: \"6b5651da-f3c7-41fe-a7b5-c9a054827d3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.575294 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldcbw\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-kube-api-access-ldcbw\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.575318 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-metrics-certs\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.575332 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z729h\" (UniqueName: \"kubernetes.io/projected/6b5651da-f3c7-41fe-a7b5-c9a054827d3d-kube-api-access-z729h\") pod \"packageserver-d55dfcdfc-cq8pk\" (UID: \"6b5651da-f3c7-41fe-a7b5-c9a054827d3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.575349 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0ed71eac-dbf3-4832-add7-58fd93339c41-certs\") pod \"machine-config-server-65h9z\" (UID: \"0ed71eac-dbf3-4832-add7-58fd93339c41\") " pod="openshift-machine-config-operator/machine-config-server-65h9z" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.575367 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7dtg\" (UniqueName: \"kubernetes.io/projected/801796b0-9f86-4104-90bb-1722280f5bfd-kube-api-access-p7dtg\") pod \"multus-admission-controller-857f4d67dd-vn8zr\" (UID: \"801796b0-9f86-4104-90bb-1722280f5bfd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vn8zr" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.575486 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/801796b0-9f86-4104-90bb-1722280f5bfd-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vn8zr\" (UID: \"801796b0-9f86-4104-90bb-1722280f5bfd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vn8zr" Jan 26 
22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.576070 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/dfaef85e-a778-46ff-976e-616a2128a811-signing-key\") pod \"service-ca-9c57cc56f-pj7sl\" (UID: \"dfaef85e-a778-46ff-976e-616a2128a811\") " pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.576102 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-registry-tls\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.576137 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjp9x\" (UniqueName: \"kubernetes.io/projected/0ed71eac-dbf3-4832-add7-58fd93339c41-kube-api-access-wjp9x\") pod \"machine-config-server-65h9z\" (UID: \"0ed71eac-dbf3-4832-add7-58fd93339c41\") " pod="openshift-machine-config-operator/machine-config-server-65h9z" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.576157 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-stats-auth\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.576238 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6b5651da-f3c7-41fe-a7b5-c9a054827d3d-webhook-cert\") pod \"packageserver-d55dfcdfc-cq8pk\" (UID: 
\"6b5651da-f3c7-41fe-a7b5-c9a054827d3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.576475 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-default-certificate\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.576515 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/685cffa9-987c-4800-b908-d5a6716e25b4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2x6x8\" (UID: \"685cffa9-987c-4800-b908-d5a6716e25b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.576532 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5212b17b-4423-4662-b026-88d37b8e6780-config-volume\") pod \"collect-profiles-29491110-fw8kx\" (UID: \"5212b17b-4423-4662-b026-88d37b8e6780\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.578970 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.581298 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ceed8696-7889-4e56-b430-dc4a6d46e1e6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.581360 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdr95\" (UniqueName: \"kubernetes.io/projected/685cffa9-987c-4800-b908-d5a6716e25b4-kube-api-access-jdr95\") pod \"olm-operator-6b444d44fb-2x6x8\" (UID: \"685cffa9-987c-4800-b908-d5a6716e25b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.581379 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2nr5\" (UniqueName: \"kubernetes.io/projected/dfaef85e-a778-46ff-976e-616a2128a811-kube-api-access-p2nr5\") pod \"service-ca-9c57cc56f-pj7sl\" (UID: \"dfaef85e-a778-46ff-976e-616a2128a811\") " pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.581441 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-service-ca-bundle\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.581626 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0ed71eac-dbf3-4832-add7-58fd93339c41-node-bootstrap-token\") pod \"machine-config-server-65h9z\" (UID: \"0ed71eac-dbf3-4832-add7-58fd93339c41\") " pod="openshift-machine-config-operator/machine-config-server-65h9z" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.581645 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6b5651da-f3c7-41fe-a7b5-c9a054827d3d-apiservice-cert\") pod \"packageserver-d55dfcdfc-cq8pk\" (UID: \"6b5651da-f3c7-41fe-a7b5-c9a054827d3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.581691 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rxzcb\" (UID: \"b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.581727 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/33ca38dd-04b2-48d0-8bdd-84b05d96ce92-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-58pvn\" (UID: \"33ca38dd-04b2-48d0-8bdd-84b05d96ce92\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.581773 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/685cffa9-987c-4800-b908-d5a6716e25b4-srv-cert\") pod \"olm-operator-6b444d44fb-2x6x8\" (UID: 
\"685cffa9-987c-4800-b908-d5a6716e25b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.581790 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvrgr\" (UniqueName: \"kubernetes.io/projected/a66e9f9f-4696-49ba-acab-6e131a5efb91-kube-api-access-xvrgr\") pod \"dns-default-jv28b\" (UID: \"a66e9f9f-4696-49ba-acab-6e131a5efb91\") " pod="openshift-dns/dns-default-jv28b" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.581811 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4cbx\" (UniqueName: \"kubernetes.io/projected/74772f4c-89ee-4080-9cb4-90ef4170a726-kube-api-access-v4cbx\") pod \"migrator-59844c95c7-dlt29\" (UID: \"74772f4c-89ee-4080-9cb4-90ef4170a726\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlt29" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.582026 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4s7z\" (UniqueName: \"kubernetes.io/projected/b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff-kube-api-access-m4s7z\") pod \"package-server-manager-789f6589d5-rxzcb\" (UID: \"b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.582158 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.582178 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ceed8696-7889-4e56-b430-dc4a6d46e1e6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.582260 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-bound-sa-token\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.582335 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ceed8696-7889-4e56-b430-dc4a6d46e1e6-registry-certificates\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.582427 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4bq9\" (UniqueName: \"kubernetes.io/projected/09d90176-aba2-4498-a4c4-dc240df81c98-kube-api-access-l4bq9\") pod \"dns-operator-744455d44c-6kszs\" (UID: \"09d90176-aba2-4498-a4c4-dc240df81c98\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kszs" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.584110 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/41617d0a-24b4-47c9-b970-4cbab31285eb-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-wbc5d\" (UID: \"41617d0a-24b4-47c9-b970-4cbab31285eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.584163 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6wv4\" (UniqueName: \"kubernetes.io/projected/33ca38dd-04b2-48d0-8bdd-84b05d96ce92-kube-api-access-n6wv4\") pod \"machine-config-controller-84d6567774-58pvn\" (UID: \"33ca38dd-04b2-48d0-8bdd-84b05d96ce92\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.584428 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/09d90176-aba2-4498-a4c4-dc240df81c98-metrics-tls\") pod \"dns-operator-744455d44c-6kszs\" (UID: \"09d90176-aba2-4498-a4c4-dc240df81c98\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kszs" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.584501 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5212b17b-4423-4662-b026-88d37b8e6780-secret-volume\") pod \"collect-profiles-29491110-fw8kx\" (UID: \"5212b17b-4423-4662-b026-88d37b8e6780\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.584565 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f4s7\" (UniqueName: \"kubernetes.io/projected/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-kube-api-access-2f4s7\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: E0126 
22:42:17.584584 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.084564221 +0000 UTC m=+153.073335733 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.584626 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9m7s\" (UniqueName: \"kubernetes.io/projected/41617d0a-24b4-47c9-b970-4cbab31285eb-kube-api-access-m9m7s\") pod \"control-plane-machine-set-operator-78cbb6b69f-wbc5d\" (UID: \"41617d0a-24b4-47c9-b970-4cbab31285eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.584789 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f7779d32-d1d6-4e24-b59e-04461b1021c3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rzprv\" (UID: \"f7779d32-d1d6-4e24-b59e-04461b1021c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.584905 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqv7k\" (UniqueName: 
\"kubernetes.io/projected/f7779d32-d1d6-4e24-b59e-04461b1021c3-kube-api-access-kqv7k\") pod \"marketplace-operator-79b997595-rzprv\" (UID: \"f7779d32-d1d6-4e24-b59e-04461b1021c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.584934 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a66e9f9f-4696-49ba-acab-6e131a5efb91-config-volume\") pod \"dns-default-jv28b\" (UID: \"a66e9f9f-4696-49ba-acab-6e131a5efb91\") " pod="openshift-dns/dns-default-jv28b" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.584955 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a66e9f9f-4696-49ba-acab-6e131a5efb91-metrics-tls\") pod \"dns-default-jv28b\" (UID: \"a66e9f9f-4696-49ba-acab-6e131a5efb91\") " pod="openshift-dns/dns-default-jv28b" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.584999 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/dfaef85e-a778-46ff-976e-616a2128a811-signing-cabundle\") pod \"service-ca-9c57cc56f-pj7sl\" (UID: \"dfaef85e-a778-46ff-976e-616a2128a811\") " pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.585181 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9mq4\" (UniqueName: \"kubernetes.io/projected/5212b17b-4423-4662-b026-88d37b8e6780-kube-api-access-m9mq4\") pod \"collect-profiles-29491110-fw8kx\" (UID: \"5212b17b-4423-4662-b026-88d37b8e6780\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.585799 4793 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.586367 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f7779d32-d1d6-4e24-b59e-04461b1021c3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rzprv\" (UID: \"f7779d32-d1d6-4e24-b59e-04461b1021c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.586703 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/33ca38dd-04b2-48d0-8bdd-84b05d96ce92-proxy-tls\") pod \"machine-config-controller-84d6567774-58pvn\" (UID: \"33ca38dd-04b2-48d0-8bdd-84b05d96ce92\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.586729 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ceed8696-7889-4e56-b430-dc4a6d46e1e6-trusted-ca\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.613964 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.620994 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" event={"ID":"49db6ef3-ecbb-44ba-90bd-6b4fad356374","Type":"ContainerStarted","Data":"e9dd6b06687e6f394029beaa70efd6dbb9f84f49af37365addaa8a9a121ad724"} Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.621386 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" event={"ID":"49db6ef3-ecbb-44ba-90bd-6b4fad356374","Type":"ContainerStarted","Data":"39767c6b73ed145b14edede4182e51a7be323599092779f886fca5215ce1dbe4"} Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.621398 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" event={"ID":"49db6ef3-ecbb-44ba-90bd-6b4fad356374","Type":"ContainerStarted","Data":"714265be8142869c8bc0d2923ebd1e4de47646776e33e15513a03576fafc7bd1"} Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.622551 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-57ccj"] Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.629580 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v"] Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.641325 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" event={"ID":"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a","Type":"ContainerStarted","Data":"96bf08a8416db674215d43cafdaad7142ec78bb80457a16a6b41c8934a8f7f31"} Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.641408 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" event={"ID":"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a","Type":"ContainerStarted","Data":"35ef9fd4d9f1a8c21342432b8e94e83c86e086b0ceab6b71120d185cc8b1ccf3"} Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.641427 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.651746 4793 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-r5m9x container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.651809 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" podUID="0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.671395 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" event={"ID":"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a","Type":"ContainerStarted","Data":"e89846f52d475c475c4920686e04aeb7e7fb70427680812e0e1875c0100432ce"} Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.671450 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" event={"ID":"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a","Type":"ContainerStarted","Data":"6f8073104a10054235daa35ee7eab491effbd82f3b32302a022237546d412b0f"} Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.671460 4793 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" event={"ID":"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a","Type":"ContainerStarted","Data":"493750e07675f90444f80ef2666e3951efa20c54ec7fe48dd7f4e35295754851"} Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.675321 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.682837 4793 generic.go:334] "Generic (PLEG): container finished" podID="108460a3-822d-405c-a0fa-cdd12ea4123f" containerID="ee9e8a33b4e192ef95cca2fec2e7441dd0a1edfb9cb8f81a28b301e049e2951f" exitCode=0 Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.682994 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" event={"ID":"108460a3-822d-405c-a0fa-cdd12ea4123f","Type":"ContainerDied","Data":"ee9e8a33b4e192ef95cca2fec2e7441dd0a1edfb9cb8f81a28b301e049e2951f"} Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.683035 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" event={"ID":"108460a3-822d-405c-a0fa-cdd12ea4123f","Type":"ContainerStarted","Data":"18aaeeda62e42829161ac2d7ae8e6d20aed4a24d6c48f25d47d1566e341f9a84"} Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.699936 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.700624 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:17 crc kubenswrapper[4793]: E0126 22:42:17.701017 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.200985779 +0000 UTC m=+153.189757341 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.701860 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-service-ca-bundle\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.700829 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-service-ca-bundle\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.702652 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rxzcb\" (UID: \"b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.702693 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0ed71eac-dbf3-4832-add7-58fd93339c41-node-bootstrap-token\") pod \"machine-config-server-65h9z\" (UID: \"0ed71eac-dbf3-4832-add7-58fd93339c41\") " pod="openshift-machine-config-operator/machine-config-server-65h9z" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.702709 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6b5651da-f3c7-41fe-a7b5-c9a054827d3d-apiservice-cert\") pod \"packageserver-d55dfcdfc-cq8pk\" (UID: \"6b5651da-f3c7-41fe-a7b5-c9a054827d3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.703301 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-plugins-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 
22:42:17.703340 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/33ca38dd-04b2-48d0-8bdd-84b05d96ce92-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-58pvn\" (UID: \"33ca38dd-04b2-48d0-8bdd-84b05d96ce92\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.703366 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/685cffa9-987c-4800-b908-d5a6716e25b4-srv-cert\") pod \"olm-operator-6b444d44fb-2x6x8\" (UID: \"685cffa9-987c-4800-b908-d5a6716e25b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.703390 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvrgr\" (UniqueName: \"kubernetes.io/projected/a66e9f9f-4696-49ba-acab-6e131a5efb91-kube-api-access-xvrgr\") pod \"dns-default-jv28b\" (UID: \"a66e9f9f-4696-49ba-acab-6e131a5efb91\") " pod="openshift-dns/dns-default-jv28b" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.703410 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4cbx\" (UniqueName: \"kubernetes.io/projected/74772f4c-89ee-4080-9cb4-90ef4170a726-kube-api-access-v4cbx\") pod \"migrator-59844c95c7-dlt29\" (UID: \"74772f4c-89ee-4080-9cb4-90ef4170a726\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlt29" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.703463 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4s7z\" (UniqueName: \"kubernetes.io/projected/b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff-kube-api-access-m4s7z\") pod \"package-server-manager-789f6589d5-rxzcb\" (UID: \"b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.703614 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e0965955-3d25-41e9-a3bc-73c19a206418-cert\") pod \"ingress-canary-rz5xt\" (UID: \"e0965955-3d25-41e9-a3bc-73c19a206418\") " pod="openshift-ingress-canary/ingress-canary-rz5xt" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.703657 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.703697 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ceed8696-7889-4e56-b430-dc4a6d46e1e6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.703720 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-bound-sa-token\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.703754 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/ceed8696-7889-4e56-b430-dc4a6d46e1e6-registry-certificates\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.703972 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-mountpoint-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.703992 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-socket-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704016 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4bq9\" (UniqueName: \"kubernetes.io/projected/09d90176-aba2-4498-a4c4-dc240df81c98-kube-api-access-l4bq9\") pod \"dns-operator-744455d44c-6kszs\" (UID: \"09d90176-aba2-4498-a4c4-dc240df81c98\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kszs" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704035 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/41617d0a-24b4-47c9-b970-4cbab31285eb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wbc5d\" (UID: \"41617d0a-24b4-47c9-b970-4cbab31285eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d" Jan 26 22:42:17 crc 
kubenswrapper[4793]: I0126 22:42:17.704063 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6wv4\" (UniqueName: \"kubernetes.io/projected/33ca38dd-04b2-48d0-8bdd-84b05d96ce92-kube-api-access-n6wv4\") pod \"machine-config-controller-84d6567774-58pvn\" (UID: \"33ca38dd-04b2-48d0-8bdd-84b05d96ce92\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704079 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-registration-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704103 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/09d90176-aba2-4498-a4c4-dc240df81c98-metrics-tls\") pod \"dns-operator-744455d44c-6kszs\" (UID: \"09d90176-aba2-4498-a4c4-dc240df81c98\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kszs" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704132 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5212b17b-4423-4662-b026-88d37b8e6780-secret-volume\") pod \"collect-profiles-29491110-fw8kx\" (UID: \"5212b17b-4423-4662-b026-88d37b8e6780\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704152 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f4s7\" (UniqueName: \"kubernetes.io/projected/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-kube-api-access-2f4s7\") pod \"router-default-5444994796-xpkxl\" (UID: 
\"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704170 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9m7s\" (UniqueName: \"kubernetes.io/projected/41617d0a-24b4-47c9-b970-4cbab31285eb-kube-api-access-m9m7s\") pod \"control-plane-machine-set-operator-78cbb6b69f-wbc5d\" (UID: \"41617d0a-24b4-47c9-b970-4cbab31285eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704205 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/dfaef85e-a778-46ff-976e-616a2128a811-signing-cabundle\") pod \"service-ca-9c57cc56f-pj7sl\" (UID: \"dfaef85e-a778-46ff-976e-616a2128a811\") " pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704225 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f7779d32-d1d6-4e24-b59e-04461b1021c3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rzprv\" (UID: \"f7779d32-d1d6-4e24-b59e-04461b1021c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704242 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqv7k\" (UniqueName: \"kubernetes.io/projected/f7779d32-d1d6-4e24-b59e-04461b1021c3-kube-api-access-kqv7k\") pod \"marketplace-operator-79b997595-rzprv\" (UID: \"f7779d32-d1d6-4e24-b59e-04461b1021c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704267 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/a66e9f9f-4696-49ba-acab-6e131a5efb91-config-volume\") pod \"dns-default-jv28b\" (UID: \"a66e9f9f-4696-49ba-acab-6e131a5efb91\") " pod="openshift-dns/dns-default-jv28b" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704284 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a66e9f9f-4696-49ba-acab-6e131a5efb91-metrics-tls\") pod \"dns-default-jv28b\" (UID: \"a66e9f9f-4696-49ba-acab-6e131a5efb91\") " pod="openshift-dns/dns-default-jv28b" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704364 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9mq4\" (UniqueName: \"kubernetes.io/projected/5212b17b-4423-4662-b026-88d37b8e6780-kube-api-access-m9mq4\") pod \"collect-profiles-29491110-fw8kx\" (UID: \"5212b17b-4423-4662-b026-88d37b8e6780\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704396 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f7779d32-d1d6-4e24-b59e-04461b1021c3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rzprv\" (UID: \"f7779d32-d1d6-4e24-b59e-04461b1021c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704414 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kcxs\" (UniqueName: \"kubernetes.io/projected/e0965955-3d25-41e9-a3bc-73c19a206418-kube-api-access-5kcxs\") pod \"ingress-canary-rz5xt\" (UID: \"e0965955-3d25-41e9-a3bc-73c19a206418\") " pod="openshift-ingress-canary/ingress-canary-rz5xt" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704451 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/33ca38dd-04b2-48d0-8bdd-84b05d96ce92-proxy-tls\") pod \"machine-config-controller-84d6567774-58pvn\" (UID: \"33ca38dd-04b2-48d0-8bdd-84b05d96ce92\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704471 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rhjg\" (UniqueName: \"kubernetes.io/projected/cdcb0afe-af56-4b15-b712-5443e772c75d-kube-api-access-5rhjg\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704499 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ceed8696-7889-4e56-b430-dc4a6d46e1e6-trusted-ca\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704523 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6b5651da-f3c7-41fe-a7b5-c9a054827d3d-tmpfs\") pod \"packageserver-d55dfcdfc-cq8pk\" (UID: \"6b5651da-f3c7-41fe-a7b5-c9a054827d3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704569 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldcbw\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-kube-api-access-ldcbw\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 
22:42:17.704586 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-metrics-certs\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704604 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z729h\" (UniqueName: \"kubernetes.io/projected/6b5651da-f3c7-41fe-a7b5-c9a054827d3d-kube-api-access-z729h\") pod \"packageserver-d55dfcdfc-cq8pk\" (UID: \"6b5651da-f3c7-41fe-a7b5-c9a054827d3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704622 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0ed71eac-dbf3-4832-add7-58fd93339c41-certs\") pod \"machine-config-server-65h9z\" (UID: \"0ed71eac-dbf3-4832-add7-58fd93339c41\") " pod="openshift-machine-config-operator/machine-config-server-65h9z" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704639 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7dtg\" (UniqueName: \"kubernetes.io/projected/801796b0-9f86-4104-90bb-1722280f5bfd-kube-api-access-p7dtg\") pod \"multus-admission-controller-857f4d67dd-vn8zr\" (UID: \"801796b0-9f86-4104-90bb-1722280f5bfd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vn8zr" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705233 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-csi-data-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " 
pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705377 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/801796b0-9f86-4104-90bb-1722280f5bfd-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vn8zr\" (UID: \"801796b0-9f86-4104-90bb-1722280f5bfd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vn8zr" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705596 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/dfaef85e-a778-46ff-976e-616a2128a811-signing-key\") pod \"service-ca-9c57cc56f-pj7sl\" (UID: \"dfaef85e-a778-46ff-976e-616a2128a811\") " pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705630 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-registry-tls\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705658 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjp9x\" (UniqueName: \"kubernetes.io/projected/0ed71eac-dbf3-4832-add7-58fd93339c41-kube-api-access-wjp9x\") pod \"machine-config-server-65h9z\" (UID: \"0ed71eac-dbf3-4832-add7-58fd93339c41\") " pod="openshift-machine-config-operator/machine-config-server-65h9z" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705687 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-stats-auth\") pod \"router-default-5444994796-xpkxl\" (UID: 
\"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705724 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6b5651da-f3c7-41fe-a7b5-c9a054827d3d-webhook-cert\") pod \"packageserver-d55dfcdfc-cq8pk\" (UID: \"6b5651da-f3c7-41fe-a7b5-c9a054827d3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705743 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-default-certificate\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705761 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/685cffa9-987c-4800-b908-d5a6716e25b4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2x6x8\" (UID: \"685cffa9-987c-4800-b908-d5a6716e25b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705778 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5212b17b-4423-4662-b026-88d37b8e6780-config-volume\") pod \"collect-profiles-29491110-fw8kx\" (UID: \"5212b17b-4423-4662-b026-88d37b8e6780\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705872 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/ceed8696-7889-4e56-b430-dc4a6d46e1e6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705960 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdr95\" (UniqueName: \"kubernetes.io/projected/685cffa9-987c-4800-b908-d5a6716e25b4-kube-api-access-jdr95\") pod \"olm-operator-6b444d44fb-2x6x8\" (UID: \"685cffa9-987c-4800-b908-d5a6716e25b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705978 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2nr5\" (UniqueName: \"kubernetes.io/projected/dfaef85e-a778-46ff-976e-616a2128a811-kube-api-access-p2nr5\") pod \"service-ca-9c57cc56f-pj7sl\" (UID: \"dfaef85e-a778-46ff-976e-616a2128a811\") " pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.706427 4793 generic.go:334] "Generic (PLEG): container finished" podID="1b75b5dd-f846-44d1-b751-5d8241200a89" containerID="c95ab1e0db1bcb0a2b24b20f5a530f5f04ffb796ac1f139715840c6e4cf507fa" exitCode=0 Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.706747 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" event={"ID":"1b75b5dd-f846-44d1-b751-5d8241200a89","Type":"ContainerDied","Data":"c95ab1e0db1bcb0a2b24b20f5a530f5f04ffb796ac1f139715840c6e4cf507fa"} Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.706956 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" event={"ID":"1b75b5dd-f846-44d1-b751-5d8241200a89","Type":"ContainerStarted","Data":"7cee3695f290c2b3f1586199616cb7e87d39ef0e39a5c1f2a9f0b8913294bf0a"} Jan 26 
22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.706973 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s"] Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.707743 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/33ca38dd-04b2-48d0-8bdd-84b05d96ce92-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-58pvn\" (UID: \"33ca38dd-04b2-48d0-8bdd-84b05d96ce92\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.710810 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5212b17b-4423-4662-b026-88d37b8e6780-config-volume\") pod \"collect-profiles-29491110-fw8kx\" (UID: \"5212b17b-4423-4662-b026-88d37b8e6780\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.711092 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ceed8696-7889-4e56-b430-dc4a6d46e1e6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.714768 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ceed8696-7889-4e56-b430-dc4a6d46e1e6-trusted-ca\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.714814 4793 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6b5651da-f3c7-41fe-a7b5-c9a054827d3d-apiservice-cert\") pod \"packageserver-d55dfcdfc-cq8pk\" (UID: \"6b5651da-f3c7-41fe-a7b5-c9a054827d3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.716566 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a66e9f9f-4696-49ba-acab-6e131a5efb91-config-volume\") pod \"dns-default-jv28b\" (UID: \"a66e9f9f-4696-49ba-acab-6e131a5efb91\") " pod="openshift-dns/dns-default-jv28b" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.717433 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-stats-auth\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.717542 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/dfaef85e-a778-46ff-976e-616a2128a811-signing-cabundle\") pod \"service-ca-9c57cc56f-pj7sl\" (UID: \"dfaef85e-a778-46ff-976e-616a2128a811\") " pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.717665 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0ed71eac-dbf3-4832-add7-58fd93339c41-node-bootstrap-token\") pod \"machine-config-server-65h9z\" (UID: \"0ed71eac-dbf3-4832-add7-58fd93339c41\") " pod="openshift-machine-config-operator/machine-config-server-65h9z" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.717833 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/6b5651da-f3c7-41fe-a7b5-c9a054827d3d-tmpfs\") pod \"packageserver-d55dfcdfc-cq8pk\" (UID: \"6b5651da-f3c7-41fe-a7b5-c9a054827d3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.718898 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5212b17b-4423-4662-b026-88d37b8e6780-secret-volume\") pod \"collect-profiles-29491110-fw8kx\" (UID: \"5212b17b-4423-4662-b026-88d37b8e6780\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.719089 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rxzcb\" (UID: \"b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.719303 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" event={"ID":"58669a0f-eecb-49dd-9637-af4dc30cd20d","Type":"ContainerStarted","Data":"4ef809a2826a84fc4c6df7b47339752d4935d8cadd1765fb919c6712c00c442f"} Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.725173 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.725862 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f7779d32-d1d6-4e24-b59e-04461b1021c3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rzprv\" (UID: 
\"f7779d32-d1d6-4e24-b59e-04461b1021c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.727349 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-metrics-certs\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.727962 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ceed8696-7889-4e56-b430-dc4a6d46e1e6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.728413 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a66e9f9f-4696-49ba-acab-6e131a5efb91-metrics-tls\") pod \"dns-default-jv28b\" (UID: \"a66e9f9f-4696-49ba-acab-6e131a5efb91\") " pod="openshift-dns/dns-default-jv28b" Jan 26 22:42:17 crc kubenswrapper[4793]: E0126 22:42:17.729578 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.229551987 +0000 UTC m=+153.218323499 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.729709 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/33ca38dd-04b2-48d0-8bdd-84b05d96ce92-proxy-tls\") pod \"machine-config-controller-84d6567774-58pvn\" (UID: \"33ca38dd-04b2-48d0-8bdd-84b05d96ce92\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.730442 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/dfaef85e-a778-46ff-976e-616a2128a811-signing-key\") pod \"service-ca-9c57cc56f-pj7sl\" (UID: \"dfaef85e-a778-46ff-976e-616a2128a811\") " pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.731711 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/801796b0-9f86-4104-90bb-1722280f5bfd-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vn8zr\" (UID: \"801796b0-9f86-4104-90bb-1722280f5bfd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vn8zr" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.734222 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/685cffa9-987c-4800-b908-d5a6716e25b4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2x6x8\" (UID: 
\"685cffa9-987c-4800-b908-d5a6716e25b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.734947 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/09d90176-aba2-4498-a4c4-dc240df81c98-metrics-tls\") pod \"dns-operator-744455d44c-6kszs\" (UID: \"09d90176-aba2-4498-a4c4-dc240df81c98\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kszs" Jan 26 22:42:17 crc kubenswrapper[4793]: W0126 22:42:17.735070 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb75b68da_88f9_46b9_a8c3_f0fa8cb5e3af.slice/crio-600e0cefd07e26e1d24d321a5c9d90f512f22d28425ade76544087bb950967f5 WatchSource:0}: Error finding container 600e0cefd07e26e1d24d321a5c9d90f512f22d28425ade76544087bb950967f5: Status 404 returned error can't find the container with id 600e0cefd07e26e1d24d321a5c9d90f512f22d28425ade76544087bb950967f5 Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.736379 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ceed8696-7889-4e56-b430-dc4a6d46e1e6-registry-certificates\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.740169 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0ed71eac-dbf3-4832-add7-58fd93339c41-certs\") pod \"machine-config-server-65h9z\" (UID: \"0ed71eac-dbf3-4832-add7-58fd93339c41\") " pod="openshift-machine-config-operator/machine-config-server-65h9z" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.744652 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" 
(UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-registry-tls\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.748026 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-default-certificate\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.751362 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6b5651da-f3c7-41fe-a7b5-c9a054827d3d-webhook-cert\") pod \"packageserver-d55dfcdfc-cq8pk\" (UID: \"6b5651da-f3c7-41fe-a7b5-c9a054827d3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.751706 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f7779d32-d1d6-4e24-b59e-04461b1021c3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rzprv\" (UID: \"f7779d32-d1d6-4e24-b59e-04461b1021c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.766398 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2nr5\" (UniqueName: \"kubernetes.io/projected/dfaef85e-a778-46ff-976e-616a2128a811-kube-api-access-p2nr5\") pod \"service-ca-9c57cc56f-pj7sl\" (UID: \"dfaef85e-a778-46ff-976e-616a2128a811\") " pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.766454 4793 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/41617d0a-24b4-47c9-b970-4cbab31285eb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wbc5d\" (UID: \"41617d0a-24b4-47c9-b970-4cbab31285eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.767068 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdr95\" (UniqueName: \"kubernetes.io/projected/685cffa9-987c-4800-b908-d5a6716e25b4-kube-api-access-jdr95\") pod \"olm-operator-6b444d44fb-2x6x8\" (UID: \"685cffa9-987c-4800-b908-d5a6716e25b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.767288 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/685cffa9-987c-4800-b908-d5a6716e25b4-srv-cert\") pod \"olm-operator-6b444d44fb-2x6x8\" (UID: \"685cffa9-987c-4800-b908-d5a6716e25b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.775353 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4cbx\" (UniqueName: \"kubernetes.io/projected/74772f4c-89ee-4080-9cb4-90ef4170a726-kube-api-access-v4cbx\") pod \"migrator-59844c95c7-dlt29\" (UID: \"74772f4c-89ee-4080-9cb4-90ef4170a726\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlt29" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.778872 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.790348 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlt29" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.793683 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2jn5q container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.793729 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" podUID="58669a0f-eecb-49dd-9637-af4dc30cd20d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.794248 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvrgr\" (UniqueName: \"kubernetes.io/projected/a66e9f9f-4696-49ba-acab-6e131a5efb91-kube-api-access-xvrgr\") pod \"dns-default-jv28b\" (UID: \"a66e9f9f-4696-49ba-acab-6e131a5efb91\") " pod="openshift-dns/dns-default-jv28b" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.807321 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:17 crc kubenswrapper[4793]: E0126 22:42:17.807694 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 22:42:18.307638581 +0000 UTC m=+153.296410093 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.807904 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rhjg\" (UniqueName: \"kubernetes.io/projected/cdcb0afe-af56-4b15-b712-5443e772c75d-kube-api-access-5rhjg\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.807976 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-csi-data-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.808117 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-plugins-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.808218 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e0965955-3d25-41e9-a3bc-73c19a206418-cert\") pod 
\"ingress-canary-rz5xt\" (UID: \"e0965955-3d25-41e9-a3bc-73c19a206418\") " pod="openshift-ingress-canary/ingress-canary-rz5xt" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.808246 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.808284 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-mountpoint-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.808307 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-socket-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.808415 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-registration-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.808550 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kcxs\" (UniqueName: 
\"kubernetes.io/projected/e0965955-3d25-41e9-a3bc-73c19a206418-kube-api-access-5kcxs\") pod \"ingress-canary-rz5xt\" (UID: \"e0965955-3d25-41e9-a3bc-73c19a206418\") " pod="openshift-ingress-canary/ingress-canary-rz5xt" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.809820 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-registration-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.811145 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-csi-data-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.812664 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.813696 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-plugins-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.814238 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-socket-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.814584 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-mountpoint-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: E0126 22:42:17.814824 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.314796108 +0000 UTC m=+153.303567620 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.832984 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e0965955-3d25-41e9-a3bc-73c19a206418-cert\") pod \"ingress-canary-rz5xt\" (UID: \"e0965955-3d25-41e9-a3bc-73c19a206418\") " pod="openshift-ingress-canary/ingress-canary-rz5xt" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.838294 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9m7s\" (UniqueName: \"kubernetes.io/projected/41617d0a-24b4-47c9-b970-4cbab31285eb-kube-api-access-m9m7s\") pod \"control-plane-machine-set-operator-78cbb6b69f-wbc5d\" (UID: \"41617d0a-24b4-47c9-b970-4cbab31285eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.839018 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqv7k\" (UniqueName: \"kubernetes.io/projected/f7779d32-d1d6-4e24-b59e-04461b1021c3-kube-api-access-kqv7k\") pod \"marketplace-operator-79b997595-rzprv\" (UID: \"f7779d32-d1d6-4e24-b59e-04461b1021c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.864444 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-jv28b" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.874103 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f4s7\" (UniqueName: \"kubernetes.io/projected/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-kube-api-access-2f4s7\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.877404 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9mq4\" (UniqueName: \"kubernetes.io/projected/5212b17b-4423-4662-b026-88d37b8e6780-kube-api-access-m9mq4\") pod \"collect-profiles-29491110-fw8kx\" (UID: \"5212b17b-4423-4662-b026-88d37b8e6780\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.894390 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-bound-sa-token\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.910161 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:17 crc kubenswrapper[4793]: E0126 22:42:17.910363 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 22:42:18.410332112 +0000 UTC m=+153.399103624 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.911582 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: E0126 22:42:17.912444 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.412429265 +0000 UTC m=+153.401200777 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.922757 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4s7z\" (UniqueName: \"kubernetes.io/projected/b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff-kube-api-access-m4s7z\") pod \"package-server-manager-789f6589d5-rxzcb\" (UID: \"b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.936317 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6wv4\" (UniqueName: \"kubernetes.io/projected/33ca38dd-04b2-48d0-8bdd-84b05d96ce92-kube-api-access-n6wv4\") pod \"machine-config-controller-84d6567774-58pvn\" (UID: \"33ca38dd-04b2-48d0-8bdd-84b05d96ce92\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.942133 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2l78j"] Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.946155 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss"] Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.979755 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw"] Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 
22:42:17.984999 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldcbw\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-kube-api-access-ldcbw\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.999509 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjp9x\" (UniqueName: \"kubernetes.io/projected/0ed71eac-dbf3-4832-add7-58fd93339c41-kube-api-access-wjp9x\") pod \"machine-config-server-65h9z\" (UID: \"0ed71eac-dbf3-4832-add7-58fd93339c41\") " pod="openshift-machine-config-operator/machine-config-server-65h9z" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.002035 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4bq9\" (UniqueName: \"kubernetes.io/projected/09d90176-aba2-4498-a4c4-dc240df81c98-kube-api-access-l4bq9\") pod \"dns-operator-744455d44c-6kszs\" (UID: \"09d90176-aba2-4498-a4c4-dc240df81c98\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kszs" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.007758 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.013397 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.013680 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.513665102 +0000 UTC m=+153.502436614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.014011 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.016572 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z729h\" (UniqueName: \"kubernetes.io/projected/6b5651da-f3c7-41fe-a7b5-c9a054827d3d-kube-api-access-z729h\") pod \"packageserver-d55dfcdfc-cq8pk\" (UID: \"6b5651da-f3c7-41fe-a7b5-c9a054827d3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.026750 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.048330 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-6kszs" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.064242 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7dtg\" (UniqueName: \"kubernetes.io/projected/801796b0-9f86-4104-90bb-1722280f5bfd-kube-api-access-p7dtg\") pod \"multus-admission-controller-857f4d67dd-vn8zr\" (UID: \"801796b0-9f86-4104-90bb-1722280f5bfd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vn8zr" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.065001 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-vn8zr" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.078453 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kcxs\" (UniqueName: \"kubernetes.io/projected/e0965955-3d25-41e9-a3bc-73c19a206418-kube-api-access-5kcxs\") pod \"ingress-canary-rz5xt\" (UID: \"e0965955-3d25-41e9-a3bc-73c19a206418\") " pod="openshift-ingress-canary/ingress-canary-rz5xt" Jan 26 22:42:18 crc kubenswrapper[4793]: W0126 22:42:18.081037 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf111a8c3_da3b_48f7_aad2_693c62b93659.slice/crio-69415f202dd286597dad1256eb90d900f2df7eeec809c0056895bde2a1883683 WatchSource:0}: Error finding container 69415f202dd286597dad1256eb90d900f2df7eeec809c0056895bde2a1883683: Status 404 returned error can't find the container with id 69415f202dd286597dad1256eb90d900f2df7eeec809c0056895bde2a1883683 Jan 26 22:42:18 crc kubenswrapper[4793]: W0126 22:42:18.097418 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57535671_8438_44b0_95f3_24679160fb8d.slice/crio-4fa17532c9413ffc9078f4e9d27fc930c3113b0f8205661f9431fbcda9fdb015 WatchSource:0}: Error finding container 4fa17532c9413ffc9078f4e9d27fc930c3113b0f8205661f9431fbcda9fdb015: Status 404 returned error can't find the container with id 4fa17532c9413ffc9078f4e9d27fc930c3113b0f8205661f9431fbcda9fdb015 Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.104254 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lvnpc"] Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.105218 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.106848 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rhjg\" (UniqueName: \"kubernetes.io/projected/cdcb0afe-af56-4b15-b712-5443e772c75d-kube-api-access-5rhjg\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.123248 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.123744 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.623704186 +0000 UTC m=+153.612475698 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.127327 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf"] Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.129727 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.136761 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.157696 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.172131 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-65h9z" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.177727 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rz5xt" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.197991 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.224517 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.224869 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.72483086 +0000 UTC m=+153.713602372 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.225048 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.225445 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.725434728 +0000 UTC m=+153.714206240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.279552 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-g9vhm"] Jan 26 22:42:18 crc kubenswrapper[4793]: W0126 22:42:18.280934 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode17125aa_eb99_4bad_a99d_44b86be4f09d.slice/crio-52b42dc9fc3fc776aa8343ac648fa9a1880a7934eae4c7a756dfbfced2a3bb5c WatchSource:0}: Error finding container 52b42dc9fc3fc776aa8343ac648fa9a1880a7934eae4c7a756dfbfced2a3bb5c: Status 404 returned error can't find the container with id 52b42dc9fc3fc776aa8343ac648fa9a1880a7934eae4c7a756dfbfced2a3bb5c Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.281897 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2fq8c"] Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.301148 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gxxfj"] Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.322526 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.322568 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.325807 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.325924 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.825903612 +0000 UTC m=+153.814675124 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.326328 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.327373 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.827350856 +0000 UTC m=+153.816122388 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.430168 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.430863 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.93084099 +0000 UTC m=+153.919612502 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:18 crc kubenswrapper[4793]: W0126 22:42:18.435840 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36b0f3df_e65a_41d3_b718_916bd868f437.slice/crio-1eec77a211d972f5524037859ce5fa9b4a5ea78fb3e0c8e9c5d44264a423d2d3 WatchSource:0}: Error finding container 1eec77a211d972f5524037859ce5fa9b4a5ea78fb3e0c8e9c5d44264a423d2d3: Status 404 returned error can't find the container with id 1eec77a211d972f5524037859ce5fa9b4a5ea78fb3e0c8e9c5d44264a423d2d3 Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.455947 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5"] Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.458568 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6"] Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.519757 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh"] Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.531570 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj"] Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.533899 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.534468 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:19.034444088 +0000 UTC m=+154.023215600 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.538847 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8"] Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.540937 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" podStartSLOduration=126.540922075 podStartE2EDuration="2m6.540922075s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:18.511906593 +0000 UTC m=+153.500678105" watchObservedRunningTime="2026-01-26 22:42:18.540922075 +0000 UTC m=+153.529693587" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.593759 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" podStartSLOduration=126.59370658899999 podStartE2EDuration="2m6.593706589s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:18.569765162 +0000 UTC m=+153.558536674" watchObservedRunningTime="2026-01-26 22:42:18.593706589 +0000 UTC m=+153.582478101" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.597913 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8"] Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.602066 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pj7sl"] Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.623032 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq"] Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.634880 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.635655 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:19.135629734 +0000 UTC m=+154.124401246 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.635807 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.636423 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:19.136391587 +0000 UTC m=+154.125163089 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.739007 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.739672 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:19.239647115 +0000 UTC m=+154.228418627 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.788471 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-65h9z" event={"ID":"0ed71eac-dbf3-4832-add7-58fd93339c41","Type":"ContainerStarted","Data":"e24638c37778163b679e0bdfbdfc3278e0eed094ccce1232b194269a769f6999"} Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.823975 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" podStartSLOduration=126.823952457 podStartE2EDuration="2m6.823952457s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:18.823758851 +0000 UTC m=+153.812530363" watchObservedRunningTime="2026-01-26 22:42:18.823952457 +0000 UTC m=+153.812723959" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.825009 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" event={"ID":"58669a0f-eecb-49dd-9637-af4dc30cd20d","Type":"ContainerStarted","Data":"536ca99918fb3552cb94df324b5d23422c69b27ef2ccfca8c0fb91fe19520070"} Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.826626 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2jn5q container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.826696 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" podUID="58669a0f-eecb-49dd-9637-af4dc30cd20d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.830503 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" event={"ID":"1b75b5dd-f846-44d1-b751-5d8241200a89","Type":"ContainerStarted","Data":"4681f81b6f33ccab284c7ff4b9370dd801615735b68d9e0f3874526865782d3a"} Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.832007 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf" event={"ID":"ea3e8958-c4f8-41bc-b5ca-6be701416ea7","Type":"ContainerStarted","Data":"c4ba84741222ce1cb6218d2904fc86045acbac92cff11fc0bce82595e39aa018"} Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.834868 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-g9vhm" event={"ID":"f6f869ac-7928-4749-9ba7-04ec01b48bc0","Type":"ContainerStarted","Data":"1cd8a394c15ddb672f635ba1fce8177e9bcf93b89ea00e9148005262ef5cd3ec"} Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.839977 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:18 crc 
kubenswrapper[4793]: I0126 22:42:18.840944 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-57ccj" event={"ID":"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd","Type":"ContainerStarted","Data":"aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa"} Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.840971 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-57ccj" event={"ID":"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd","Type":"ContainerStarted","Data":"fc7e45cc09b7e136cabcbaac23e466557070638bb704168bde4eed70e8068d9f"} Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.841420 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:19.341407648 +0000 UTC m=+154.330179160 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.846974 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" event={"ID":"36b0f3df-e65a-41d3-b718-916bd868f437","Type":"ContainerStarted","Data":"1eec77a211d972f5524037859ce5fa9b4a5ea78fb3e0c8e9c5d44264a423d2d3"} Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.862804 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" 
event={"ID":"e17125aa-eb99-4bad-a99d-44b86be4f09d","Type":"ContainerStarted","Data":"52b42dc9fc3fc776aa8343ac648fa9a1880a7934eae4c7a756dfbfced2a3bb5c"} Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.867835 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" event={"ID":"a7012682-3c72-4541-ae5b-5c1522508f39","Type":"ContainerStarted","Data":"e05a8000f8913cd20aa6207c0cc680fa4913ef20af8843ed84f893383ad64e8d"} Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.867871 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" event={"ID":"a7012682-3c72-4541-ae5b-5c1522508f39","Type":"ContainerStarted","Data":"f8ba1ce666061f394119cd3ca7ef436f6b73f9e93cc1b1446bb221e11e9a0f62"} Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.904018 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" event={"ID":"6ae9ab97-d0d9-4ba4-a842-fa74943633ba","Type":"ContainerStarted","Data":"b778976adac39956f1fcc61d6051ee163ea68f2f24e511f41050506715496d6e"} Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.919281 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" event={"ID":"4f027876-8332-446c-9ea2-29d38ba7fcfc","Type":"ContainerStarted","Data":"679134a08020f50eeb319b148f1fe1687fda64c50ef9bfa6369611cdec7e0d8a"} Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.944899 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:18 crc 
kubenswrapper[4793]: E0126 22:42:18.945087 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:19.445058818 +0000 UTC m=+154.433830330 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.945994 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.946605 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:19.446597354 +0000 UTC m=+154.435368866 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.958997 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" event={"ID":"81bdaed4-2088-404f-a937-9a682635b5ab","Type":"ContainerStarted","Data":"4c287b0751d0f15110c2d23d47e37e261c02fff7d2870afeed6dd4327bc02d9c"} Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.962381 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" event={"ID":"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d","Type":"ContainerStarted","Data":"84a12e8194ec14c3637645ecc9335c8343191862b3d5fb8c20784e16155623ec"} Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.964787 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss" event={"ID":"2f225855-c898-4519-96fb-c0556fb46513","Type":"ContainerStarted","Data":"357aefffba60b96fbe621f24bee5a12cce70876bcc4866051f6a70df07eaa74d"} Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.975523 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2l78j" event={"ID":"57535671-8438-44b0-95f3-24679160fb8d","Type":"ContainerStarted","Data":"4fa17532c9413ffc9078f4e9d27fc930c3113b0f8205661f9431fbcda9fdb015"} Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.976271 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/downloads-7954f5f757-2l78j" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.977067 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-2l78j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.977118 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2l78j" podUID="57535671-8438-44b0-95f3-24679160fb8d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Jan 26 22:42:18 crc kubenswrapper[4793]: W0126 22:42:18.982954 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod685cffa9_987c_4800_b908_d5a6716e25b4.slice/crio-e64f2b3697432f8b979ac5ec40f097022652d964e068a23052ddd67e47ab10d5 WatchSource:0}: Error finding container e64f2b3697432f8b979ac5ec40f097022652d964e068a23052ddd67e47ab10d5: Status 404 returned error can't find the container with id e64f2b3697432f8b979ac5ec40f097022652d964e068a23052ddd67e47ab10d5 Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.987020 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" event={"ID":"108460a3-822d-405c-a0fa-cdd12ea4123f","Type":"ContainerStarted","Data":"9c173340ca8ab3e87c8f7274f668f8cd11fd21616f5273abc5154e4e68c1710c"} Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.994035 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh" event={"ID":"1d457a27-1fbc-43fd-81b8-3d0b1e495f0e","Type":"ContainerStarted","Data":"a11e07e424226e6cab2d8951e32fbb129b6f1fdac3d1833d57da18499dc8cfb4"} Jan 26 
22:42:19 crc kubenswrapper[4793]: W0126 22:42:19.011896 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c808b47_7d86_4456_ae60_cb83d2a58262.slice/crio-4ae0d866478e3ef22a55a211528abf41b5bf2ee1674884112cc864b85c65d485 WatchSource:0}: Error finding container 4ae0d866478e3ef22a55a211528abf41b5bf2ee1674884112cc864b85c65d485: Status 404 returned error can't find the container with id 4ae0d866478e3ef22a55a211528abf41b5bf2ee1674884112cc864b85c65d485 Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.014309 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" event={"ID":"f111a8c3-da3b-48f7-aad2-693c62b93659","Type":"ContainerStarted","Data":"69415f202dd286597dad1256eb90d900f2df7eeec809c0056895bde2a1883683"} Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.029370 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xpkxl" event={"ID":"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6","Type":"ContainerStarted","Data":"112c222b8cb653648bcb4693b6aa55f055488e71c0c9795404beae8f592dd5d8"} Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.039800 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" event={"ID":"3d20f3c1-8995-4eb3-9c83-3219e7ad35ec","Type":"ContainerStarted","Data":"3924b8750b2e795e17b40b9fb7874af95661a4a181beed310c4b6d32b998c205"} Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.039850 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" event={"ID":"3d20f3c1-8995-4eb3-9c83-3219e7ad35ec","Type":"ContainerStarted","Data":"914d04d56feb7a9c5ead7fb33b39b2c046c63eb407160d2af49d6084ec87388a"} Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.041105 4793 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jv28b"] Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.044987 4793 generic.go:334] "Generic (PLEG): container finished" podID="b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af" containerID="f8c13020df21bae69d5cfc42233d948fb82e3b070172af0a5386752ffd58c8df" exitCode=0 Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.045428 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" event={"ID":"b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af","Type":"ContainerDied","Data":"f8c13020df21bae69d5cfc42233d948fb82e3b070172af0a5386752ffd58c8df"} Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.045505 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" event={"ID":"b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af","Type":"ContainerStarted","Data":"600e0cefd07e26e1d24d321a5c9d90f512f22d28425ade76544087bb950967f5"} Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.049712 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.052365 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:19.552346388 +0000 UTC m=+154.541117900 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.052555 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb" event={"ID":"614c35d6-ed0a-4b6d-9241-6df532fa9528","Type":"ContainerStarted","Data":"ba8f3925b54ba7ffb434a8e44462ed039b5a860c5a16d87b7ec57cf9c726e9f3"} Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.053934 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb" event={"ID":"614c35d6-ed0a-4b6d-9241-6df532fa9528","Type":"ContainerStarted","Data":"ae8900e7b166d622b4268c24095477d96c34b6314803c9e1cf2a2165ba8556f6"} Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.073765 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-dlt29"] Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.103044 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.154571 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.160318 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:19.660269958 +0000 UTC m=+154.649041470 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.190353 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx"] Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.195616 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6kszs"] Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.255092 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.255390 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 22:42:19.755367978 +0000 UTC m=+154.744139490 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.310411 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb"] Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.316232 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn"] Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.318067 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vn8zr"] Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.341165 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5kfbz"] Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.349941 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d"] Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.356650 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:19 crc 
kubenswrapper[4793]: E0126 22:42:19.357122 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:19.85709947 +0000 UTC m=+154.845870982 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.438421 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rz5xt"] Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.440254 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rzprv"] Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.451987 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk"] Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.459396 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.459746 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:19.95973162 +0000 UTC m=+154.948503132 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.561409 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.561854 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.061831122 +0000 UTC m=+155.050602635 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.662618 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.662790 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.16276263 +0000 UTC m=+155.151534142 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.662891 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.663302 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.163294236 +0000 UTC m=+155.152065748 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.764282 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.764495 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.26445052 +0000 UTC m=+155.253222032 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.764662 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.765055 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.265023128 +0000 UTC m=+155.253794650 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.865413 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.865615 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.365586224 +0000 UTC m=+155.354357736 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.866004 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.866401 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.366390119 +0000 UTC m=+155.355161631 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.886970 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" podStartSLOduration=127.886955734 podStartE2EDuration="2m7.886955734s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:19.886007195 +0000 UTC m=+154.874778717" watchObservedRunningTime="2026-01-26 22:42:19.886955734 +0000 UTC m=+154.875727246" Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.908979 4793 csr.go:261] certificate signing request csr-zv74z is approved, waiting to be issued Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.925787 4793 csr.go:257] certificate signing request csr-zv74z is issued Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.966627 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.966878 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.466864892 +0000 UTC m=+155.455636394 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.000344 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" podStartSLOduration=128.000323719 podStartE2EDuration="2m8.000323719s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:19.998491543 +0000 UTC m=+154.987263065" watchObservedRunningTime="2026-01-26 22:42:20.000323719 +0000 UTC m=+154.989095231" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.058461 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" podStartSLOduration=128.058444375 podStartE2EDuration="2m8.058444375s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.056319601 +0000 UTC m=+155.045091113" watchObservedRunningTime="2026-01-26 22:42:20.058444375 +0000 UTC m=+155.047215887" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.058994 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-57ccj" 
podStartSLOduration=128.058988012 podStartE2EDuration="2m8.058988012s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.030847657 +0000 UTC m=+155.019619169" watchObservedRunningTime="2026-01-26 22:42:20.058988012 +0000 UTC m=+155.047759524" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.069821 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:20 crc kubenswrapper[4793]: E0126 22:42:20.070694 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.570680567 +0000 UTC m=+155.559452079 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.084630 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" event={"ID":"f111a8c3-da3b-48f7-aad2-693c62b93659","Type":"ContainerStarted","Data":"1dd1f24f90a7871125b38db0808a6ec03e999cfdc8e6c92a2abf285b05d72742"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.106429 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" event={"ID":"dfaef85e-a778-46ff-976e-616a2128a811","Type":"ContainerStarted","Data":"2bb368465f2dda10e5411dffd6581c5e9f6aa711f84c58e5ece12f848645e16a"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.106496 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" event={"ID":"dfaef85e-a778-46ff-976e-616a2128a811","Type":"ContainerStarted","Data":"d8e4799f6fe3f347fc39ed8ded3eaa463b2f88336348453e3deb237201bef022"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.119124 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" event={"ID":"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d","Type":"ContainerStarted","Data":"519194ab2b1b2d6ee349e477986d2db39f0a990a30d601aac2530787a39d4528"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.121714 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d" 
event={"ID":"41617d0a-24b4-47c9-b970-4cbab31285eb","Type":"ContainerStarted","Data":"09d0eac78f32807cfcc281aa405a2ac286813a6d91d2120754532d6910b0b1d1"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.125738 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" event={"ID":"e17125aa-eb99-4bad-a99d-44b86be4f09d","Type":"ContainerStarted","Data":"fda2f5b52038cfc33503a309170e245bf286f4bab1007f8d89f6a02e8239cdfe"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.125796 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.133379 4793 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-lvnpc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body= Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.133449 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" podUID="e17125aa-eb99-4bad-a99d-44b86be4f09d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.171460 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:20 crc kubenswrapper[4793]: E0126 22:42:20.172439 4793 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.672410709 +0000 UTC m=+155.661182221 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.183586 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" event={"ID":"cdcb0afe-af56-4b15-b712-5443e772c75d","Type":"ContainerStarted","Data":"728afac09b64a24556094731b0bc6273dc13c7ef835884946b96b913198b3f8e"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.187564 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-2l78j" podStartSLOduration=128.187543289 podStartE2EDuration="2m8.187543289s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.185070914 +0000 UTC m=+155.173842436" watchObservedRunningTime="2026-01-26 22:42:20.187543289 +0000 UTC m=+155.176314801" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.187862 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" podStartSLOduration=128.187857338 podStartE2EDuration="2m8.187857338s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.142307194 +0000 UTC m=+155.131078706" watchObservedRunningTime="2026-01-26 22:42:20.187857338 +0000 UTC m=+155.176628850" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.216039 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2l78j" event={"ID":"57535671-8438-44b0-95f3-24679160fb8d","Type":"ContainerStarted","Data":"c5771c6c061f963913b73a7dc637cfade05842093c91eba1f5e6b9f2a68298f5"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.216981 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-2l78j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.217016 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2l78j" podUID="57535671-8438-44b0-95f3-24679160fb8d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.220958 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" podStartSLOduration=128.220934314 podStartE2EDuration="2m8.220934314s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.215451197 +0000 UTC m=+155.204222709" watchObservedRunningTime="2026-01-26 22:42:20.220934314 +0000 UTC m=+155.209705826" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.240162 4793 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf" event={"ID":"ea3e8958-c4f8-41bc-b5ca-6be701416ea7","Type":"ContainerStarted","Data":"4884e01494e13e16506fd3435514f3207e0f7b8263f94afc51565a1f3d87f275"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.244244 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" event={"ID":"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c","Type":"ContainerStarted","Data":"57f0beeea5e3ec1513ec4de6915e931c6ec5ae086fb38fe05c2d20a971915d31"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.244265 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" event={"ID":"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c","Type":"ContainerStarted","Data":"9102b23668ac789ae3ef4986aab063c6c62002b9ff73c4f66dbf881941955b3c"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.273706 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:20 crc kubenswrapper[4793]: E0126 22:42:20.274155 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.774138341 +0000 UTC m=+155.762909853 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.281917 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" event={"ID":"108460a3-822d-405c-a0fa-cdd12ea4123f","Type":"ContainerStarted","Data":"28cfe07dd43a3ebb532b016f17952bf0c5719887ac6a2caec5afc790fff10381"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.298033 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" podStartSLOduration=128.298008976 podStartE2EDuration="2m8.298008976s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.26195294 +0000 UTC m=+155.250724462" watchObservedRunningTime="2026-01-26 22:42:20.298008976 +0000 UTC m=+155.286780498" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.308592 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xpkxl" event={"ID":"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6","Type":"ContainerStarted","Data":"43b8679abe3ec73a9e35eb062d7545ae5d61f7f2c37f26adf9c7a394bc5375fc"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.319974 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf" podStartSLOduration=128.319956123 podStartE2EDuration="2m8.319956123s" 
podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.300350717 +0000 UTC m=+155.289122229" watchObservedRunningTime="2026-01-26 22:42:20.319956123 +0000 UTC m=+155.308727645" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.335977 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" podStartSLOduration=128.335953659 podStartE2EDuration="2m8.335953659s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.329912426 +0000 UTC m=+155.318683938" watchObservedRunningTime="2026-01-26 22:42:20.335953659 +0000 UTC m=+155.324725191" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.338080 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlt29" event={"ID":"74772f4c-89ee-4080-9cb4-90ef4170a726","Type":"ContainerStarted","Data":"4fe5fac61d8bcbaad8682bd09bfd4ba1ef2dd38e8e4420b580f4e94df196e726"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.348594 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-g9vhm" event={"ID":"f6f869ac-7928-4749-9ba7-04ec01b48bc0","Type":"ContainerStarted","Data":"56df71948f04df090d995ba09e86f6fa531a10e0a9572412a5b855b49cd6ba7b"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.349742 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.356326 4793 patch_prober.go:28] interesting pod/console-operator-58897d9998-g9vhm container/console-operator 
namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.356383 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-g9vhm" podUID="f6f869ac-7928-4749-9ba7-04ec01b48bc0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.375021 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:20 crc kubenswrapper[4793]: E0126 22:42:20.375595 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.875567873 +0000 UTC m=+155.864339375 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.376790 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-xpkxl" podStartSLOduration=128.3767657 podStartE2EDuration="2m8.3767657s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.374856362 +0000 UTC m=+155.363627874" watchObservedRunningTime="2026-01-26 22:42:20.3767657 +0000 UTC m=+155.365537212" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.379548 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" event={"ID":"f7779d32-d1d6-4e24-b59e-04461b1021c3","Type":"ContainerStarted","Data":"7bb09463affe65b8c8341ff8758a1535e4d3942375b31382f2863df221ac542b"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.389852 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" event={"ID":"33ca38dd-04b2-48d0-8bdd-84b05d96ce92","Type":"ContainerStarted","Data":"7fa4766f4e560eff9395dce8f1434196a478bb1db88308694f733caf8cce5825"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.394816 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" 
event={"ID":"4f027876-8332-446c-9ea2-29d38ba7fcfc","Type":"ContainerStarted","Data":"c78299e65e90b02694396d058c02447fefca5b08a61bbcd41705090715346153"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.395389 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.396648 4793 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r4dq5 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.396706 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" podUID="4f027876-8332-446c-9ea2-29d38ba7fcfc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.404057 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" event={"ID":"1c808b47-7d86-4456-ae60-cb83d2a58262","Type":"ContainerStarted","Data":"4afc3787d038b7f211e9f1794499bc7f0221a55d43785fac9fcc416505f09c3d"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.404100 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" event={"ID":"1c808b47-7d86-4456-ae60-cb83d2a58262","Type":"ContainerStarted","Data":"4ae0d866478e3ef22a55a211528abf41b5bf2ee1674884112cc864b85c65d485"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.407986 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-canary/ingress-canary-rz5xt" event={"ID":"e0965955-3d25-41e9-a3bc-73c19a206418","Type":"ContainerStarted","Data":"88a9796495583d25b8e5202eee2269e353e6d66e13a0d8b1dbad34d9e341cf0c"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.411309 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" event={"ID":"5212b17b-4423-4662-b026-88d37b8e6780","Type":"ContainerStarted","Data":"f22596edcefced17885ef893f7cee49907f5d9cebcaf8c8793ed7795985ac763"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.417818 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" event={"ID":"6b5651da-f3c7-41fe-a7b5-c9a054827d3d","Type":"ContainerStarted","Data":"55eda6eff93920d03c6df4418723ce4588f89ac26d518b5f05d86a604f901b24"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.431389 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" podStartSLOduration=128.431359459 podStartE2EDuration="2m8.431359459s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.430514663 +0000 UTC m=+155.419286175" watchObservedRunningTime="2026-01-26 22:42:20.431359459 +0000 UTC m=+155.420130971" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.451596 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vn8zr" event={"ID":"801796b0-9f86-4104-90bb-1722280f5bfd","Type":"ContainerStarted","Data":"6464863e6fa11f4ee0ad9cc3266a4177a63cbc37916e8c319133579f29ccaf25"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.467923 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" podStartSLOduration=128.467900989 podStartE2EDuration="2m8.467900989s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.464366302 +0000 UTC m=+155.453137814" watchObservedRunningTime="2026-01-26 22:42:20.467900989 +0000 UTC m=+155.456672501" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.477526 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:20 crc kubenswrapper[4793]: E0126 22:42:20.480745 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.980731009 +0000 UTC m=+155.969502521 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.494754 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-g9vhm" podStartSLOduration=128.494729675 podStartE2EDuration="2m8.494729675s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.49458184 +0000 UTC m=+155.483353352" watchObservedRunningTime="2026-01-26 22:42:20.494729675 +0000 UTC m=+155.483501187" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.508258 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" event={"ID":"b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af","Type":"ContainerStarted","Data":"a0eff1dbbeb513457356d440c74f85eac8f3d7002b6df959f9b56edd914966fe"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.508562 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.512700 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" event={"ID":"b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff","Type":"ContainerStarted","Data":"e22d95b1378ed93b62d43bd40d54a953419eccba50bffab7aee56f9d82b494ce"} Jan 26 22:42:20 crc kubenswrapper[4793]: 
I0126 22:42:20.521106 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" event={"ID":"36b0f3df-e65a-41d3-b718-916bd868f437","Type":"ContainerStarted","Data":"06b092ac2aa030f2688f72aec8bbe2d5905e42f75feda5dbced981dfc4885ce9"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.527557 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb" event={"ID":"614c35d6-ed0a-4b6d-9241-6df532fa9528","Type":"ContainerStarted","Data":"9cd4ef78972a667059c90bb468eb0a0fb8e5fc18225cff19e7b7f03070ecb96e"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.531947 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss" event={"ID":"2f225855-c898-4519-96fb-c0556fb46513","Type":"ContainerStarted","Data":"2711ba92af789bbdf6da58ab3dbf6431628bef7878559a17a7461ad1b0472787"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.532847 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" event={"ID":"81bdaed4-2088-404f-a937-9a682635b5ab","Type":"ContainerStarted","Data":"2f0686c483eef76044cf59a0a38731780840bc9e0cdf353692e5b681dc3a63a7"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.533750 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jv28b" event={"ID":"a66e9f9f-4696-49ba-acab-6e131a5efb91","Type":"ContainerStarted","Data":"b96a8b619b88ff3b2d45d955f1320c704ca8047e2077990c463545927b3ecd57"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.534498 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6kszs" 
event={"ID":"09d90176-aba2-4498-a4c4-dc240df81c98","Type":"ContainerStarted","Data":"afaf4d3ad878b43967e67d389c33620a46fa5185b00f228e1d88d8d33e509f7f"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.542467 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" event={"ID":"685cffa9-987c-4800-b908-d5a6716e25b4","Type":"ContainerStarted","Data":"e97e7bf1b91c0a433c1f12c8271871780da24bc3beda87bc27b40f2d8f236632"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.542535 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" event={"ID":"685cffa9-987c-4800-b908-d5a6716e25b4","Type":"ContainerStarted","Data":"e64f2b3697432f8b979ac5ec40f097022652d964e068a23052ddd67e47ab10d5"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.544105 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" podStartSLOduration=128.544088645 podStartE2EDuration="2m8.544088645s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.541154096 +0000 UTC m=+155.529925608" watchObservedRunningTime="2026-01-26 22:42:20.544088645 +0000 UTC m=+155.532860157" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.544232 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.545643 4793 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-2x6x8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" 
start-of-body= Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.546123 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" podUID="685cffa9-987c-4800-b908-d5a6716e25b4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.546438 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh" event={"ID":"1d457a27-1fbc-43fd-81b8-3d0b1e495f0e","Type":"ContainerStarted","Data":"fc3e48629f984ecf6485f8835f3823d2582d35f6c8dadefc7306d998227d1e2d"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.551700 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-65h9z" event={"ID":"0ed71eac-dbf3-4832-add7-58fd93339c41","Type":"ContainerStarted","Data":"1de510e62a9ceea378f43335e071a9e3a95f69916d181b873578be79ff8007c0"} Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.556872 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2jn5q container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.556917 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" podUID="58669a0f-eecb-49dd-9637-af4dc30cd20d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.579620 4793 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:20 crc kubenswrapper[4793]: E0126 22:42:20.580720 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:21.080700598 +0000 UTC m=+156.069472110 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.586465 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" podStartSLOduration=128.586416491 podStartE2EDuration="2m8.586416491s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.58375528 +0000 UTC m=+155.572526802" watchObservedRunningTime="2026-01-26 22:42:20.586416491 +0000 UTC m=+155.575188003" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.674851 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" podStartSLOduration=128.674832318 
podStartE2EDuration="2m8.674832318s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.615567417 +0000 UTC m=+155.604338929" watchObservedRunningTime="2026-01-26 22:42:20.674832318 +0000 UTC m=+155.663603830" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.676491 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss" podStartSLOduration=128.676481909 podStartE2EDuration="2m8.676481909s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.674818248 +0000 UTC m=+155.663589760" watchObservedRunningTime="2026-01-26 22:42:20.676481909 +0000 UTC m=+155.665253421" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.683056 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:20 crc kubenswrapper[4793]: E0126 22:42:20.692931 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:21.192895777 +0000 UTC m=+156.181667289 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.704570 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" podStartSLOduration=128.704549792 podStartE2EDuration="2m8.704549792s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.70152794 +0000 UTC m=+155.690299452" watchObservedRunningTime="2026-01-26 22:42:20.704549792 +0000 UTC m=+155.693321304" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.733931 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb" podStartSLOduration=128.733911034 podStartE2EDuration="2m8.733911034s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.730783859 +0000 UTC m=+155.719555361" watchObservedRunningTime="2026-01-26 22:42:20.733911034 +0000 UTC m=+155.722682546" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.770626 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" podStartSLOduration=128.770607219 podStartE2EDuration="2m8.770607219s" podCreationTimestamp="2026-01-26 22:40:12 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.770049372 +0000 UTC m=+155.758820884" watchObservedRunningTime="2026-01-26 22:42:20.770607219 +0000 UTC m=+155.759378731" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.784020 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:20 crc kubenswrapper[4793]: E0126 22:42:20.784461 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:21.284417329 +0000 UTC m=+156.273188841 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.823098 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-65h9z" podStartSLOduration=6.823080064 podStartE2EDuration="6.823080064s" podCreationTimestamp="2026-01-26 22:42:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.822424564 +0000 UTC m=+155.811196066" watchObservedRunningTime="2026-01-26 22:42:20.823080064 +0000 UTC m=+155.811851576" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.885782 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:20 crc kubenswrapper[4793]: E0126 22:42:20.886117 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:21.386101359 +0000 UTC m=+156.374872871 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.929511 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-26 22:37:19 +0000 UTC, rotation deadline is 2026-12-19 02:28:12.970852691 +0000 UTC Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.929574 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7827h45m52.041280301s for next certificate rotation Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.953074 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh" podStartSLOduration=128.953050544 podStartE2EDuration="2m8.953050544s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.87921269 +0000 UTC m=+155.867984202" watchObservedRunningTime="2026-01-26 22:42:20.953050544 +0000 UTC m=+155.941822056" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.958515 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" podStartSLOduration=128.958490639 podStartE2EDuration="2m8.958490639s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 
22:42:20.95027189 +0000 UTC m=+155.939043402" watchObservedRunningTime="2026-01-26 22:42:20.958490639 +0000 UTC m=+155.947262161" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.000683 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:21 crc kubenswrapper[4793]: E0126 22:42:21.001989 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:21.501925509 +0000 UTC m=+156.490697021 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.028150 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.042612 4793 patch_prober.go:28] interesting pod/router-default-5444994796-xpkxl container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.042673 4793 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-xpkxl" podUID="6b2e3c07-cf6d-4c97-b19b-dacc24b947d6" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.102023 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:21 crc kubenswrapper[4793]: E0126 22:42:21.102418 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:21.602402453 +0000 UTC m=+156.591173955 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.202918 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:21 crc kubenswrapper[4793]: E0126 22:42:21.203285 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:21.703263568 +0000 UTC m=+156.692035080 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.304127 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:21 crc kubenswrapper[4793]: E0126 22:42:21.304675 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:21.80465523 +0000 UTC m=+156.793426742 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.405641 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:21 crc kubenswrapper[4793]: E0126 22:42:21.405955 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:21.905939368 +0000 UTC m=+156.894710880 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.506995 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:21 crc kubenswrapper[4793]: E0126 22:42:21.507338 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.007326529 +0000 UTC m=+156.996098031 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.561975 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" event={"ID":"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d","Type":"ContainerStarted","Data":"3bd4e60ea831be7abd97ac89ea034c13f105afd80746f88f9f49b63d88e5999a"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.563862 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" event={"ID":"6b5651da-f3c7-41fe-a7b5-c9a054827d3d","Type":"ContainerStarted","Data":"bc8bb0ed30eb61eee3e45b048ba9ec50fe5d2e13dec7f95c26677cae2edd6de2"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.564055 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.565627 4793 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-cq8pk container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.565733 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" podUID="6b5651da-f3c7-41fe-a7b5-c9a054827d3d" containerName="packageserver" probeResult="failure" 
output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.566968 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" event={"ID":"33ca38dd-04b2-48d0-8bdd-84b05d96ce92","Type":"ContainerStarted","Data":"6877adb4fff2b2005c12cea00d006616ca607c0fdf5d2cafde12dc0592e0a74a"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.567059 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" event={"ID":"33ca38dd-04b2-48d0-8bdd-84b05d96ce92","Type":"ContainerStarted","Data":"e655827dc2c63c02fba1cc49c1cadd82c927c20af99d9ff83efb3b8463998f08"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.574386 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6kszs" event={"ID":"09d90176-aba2-4498-a4c4-dc240df81c98","Type":"ContainerStarted","Data":"8e5ad5f9d5e3843e851452467a4800702fc19c71899d8a1fc5b382544172bc01"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.574412 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6kszs" event={"ID":"09d90176-aba2-4498-a4c4-dc240df81c98","Type":"ContainerStarted","Data":"28641c7ca3fafc914ce53eb4655beb6491cd6920276685048b21368be95f372c"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.577675 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlt29" event={"ID":"74772f4c-89ee-4080-9cb4-90ef4170a726","Type":"ContainerStarted","Data":"d7beae1c5747060aeaf943d138229b6f01f655f7f86be4b0a08f193d3171e3bb"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.577713 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlt29" event={"ID":"74772f4c-89ee-4080-9cb4-90ef4170a726","Type":"ContainerStarted","Data":"0364f5dd1eafec3d4ee871f892045c391e7c880ee6d381f83ac1eb8fc808a64e"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.585111 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vn8zr" event={"ID":"801796b0-9f86-4104-90bb-1722280f5bfd","Type":"ContainerStarted","Data":"5fbc6c1ccd75631bac630ab8168d1c1626fc39d28efb94cc31bf7c3d36367ced"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.585149 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vn8zr" event={"ID":"801796b0-9f86-4104-90bb-1722280f5bfd","Type":"ContainerStarted","Data":"62cbe197a07b1c376d461a4867a8ee98a05e891460e13a940e94cbb2b0b175a3"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.589258 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d" event={"ID":"41617d0a-24b4-47c9-b970-4cbab31285eb","Type":"ContainerStarted","Data":"f3b612e32782d4ebb580b253a030066305d33224313a8b405c20be51e735b1dd"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.591037 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" event={"ID":"6ae9ab97-d0d9-4ba4-a842-fa74943633ba","Type":"ContainerStarted","Data":"8390891d65c9980d57ef61ca061825430898e9b9c2d828c085528cc1a3d392ac"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.592854 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" event={"ID":"1c808b47-7d86-4456-ae60-cb83d2a58262","Type":"ContainerStarted","Data":"6810cc196bd85fa52b4601661a78bd12a58347761fe24a47372bf619f9e11108"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.594320 4793 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" event={"ID":"f7779d32-d1d6-4e24-b59e-04461b1021c3","Type":"ContainerStarted","Data":"0261db876c5107325c1d28c55b554ed6cd68dfefffa0d5f854c959425c4e8325"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.594963 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.596379 4793 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rzprv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.596428 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" podUID="f7779d32-d1d6-4e24-b59e-04461b1021c3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.598118 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jv28b" event={"ID":"a66e9f9f-4696-49ba-acab-6e131a5efb91","Type":"ContainerStarted","Data":"12eaec58684506bfdd5708acbdc52c55e0218650ecf9f7b1552ac1290fed9bbc"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.598231 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jv28b" event={"ID":"a66e9f9f-4696-49ba-acab-6e131a5efb91","Type":"ContainerStarted","Data":"bbabf566bf217a3d17afe75f42fb0faf48402c454476aa0454b3d3285add9bf7"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.598347 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-dns/dns-default-jv28b" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.601107 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" event={"ID":"b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff","Type":"ContainerStarted","Data":"bef202b41bf024129969f25ad174b141d18380e33c8d445fca173d1b2e867e9a"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.601232 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" event={"ID":"b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff","Type":"ContainerStarted","Data":"7a1e3ad2fa950130846cfb51fec53ed3c7c45f6e8da4c768b678ab2e3e85764c"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.601422 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.602473 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rz5xt" event={"ID":"e0965955-3d25-41e9-a3bc-73c19a206418","Type":"ContainerStarted","Data":"f4226376128ca3b6da6cd286fae5c0b51c008aca2460a5c55addedd023dd631d"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.604695 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" event={"ID":"5212b17b-4423-4662-b026-88d37b8e6780","Type":"ContainerStarted","Data":"cc61868d059ab224be57eb0623ed03f00405cf2898a129989b50ff353c392019"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.605411 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-2l78j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Jan 26 22:42:21 crc kubenswrapper[4793]: 
I0126 22:42:21.606139 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2l78j" podUID="57535671-8438-44b0-95f3-24679160fb8d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.605570 4793 patch_prober.go:28] interesting pod/console-operator-58897d9998-g9vhm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.605893 4793 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-lvnpc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body= Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.606372 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" podUID="e17125aa-eb99-4bad-a99d-44b86be4f09d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.606331 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-g9vhm" podUID="f6f869ac-7928-4749-9ba7-04ec01b48bc0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.606082 4793 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-2x6x8 container/olm-operator 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.606437 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" podUID="685cffa9-987c-4800-b908-d5a6716e25b4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.606582 4793 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r4dq5 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.606666 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" podUID="4f027876-8332-446c-9ea2-29d38ba7fcfc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.607300 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" podStartSLOduration=129.607285257 podStartE2EDuration="2m9.607285257s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:21.60572337 +0000 UTC m=+156.594494892" watchObservedRunningTime="2026-01-26 22:42:21.607285257 +0000 UTC m=+156.596056769" Jan 26 22:42:21 crc kubenswrapper[4793]: 
I0126 22:42:21.607487 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:21 crc kubenswrapper[4793]: E0126 22:42:21.607758 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.107742631 +0000 UTC m=+157.096514143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.633380 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d" podStartSLOduration=129.63335892999999 podStartE2EDuration="2m9.63335893s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:21.633152953 +0000 UTC m=+156.621924465" watchObservedRunningTime="2026-01-26 22:42:21.63335893 +0000 UTC m=+156.622130442" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.702863 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" podStartSLOduration=129.702845981 podStartE2EDuration="2m9.702845981s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:21.676369497 +0000 UTC m=+156.665141009" watchObservedRunningTime="2026-01-26 22:42:21.702845981 +0000 UTC m=+156.691617493" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.704861 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" podStartSLOduration=129.704853302 podStartE2EDuration="2m9.704853302s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:21.700765628 +0000 UTC m=+156.689537140" watchObservedRunningTime="2026-01-26 22:42:21.704853302 +0000 UTC m=+156.693624814" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.710610 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:21 crc kubenswrapper[4793]: E0126 22:42:21.712798 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.212783104 +0000 UTC m=+157.201554616 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.713998 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.714041 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.717070 4793 patch_prober.go:28] interesting pod/apiserver-76f77b778f-nd4pl container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.717127 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" podUID="108460a3-822d-405c-a0fa-cdd12ea4123f" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.744283 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.744612 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.752431 
4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-vn8zr" podStartSLOduration=129.752419668 podStartE2EDuration="2m9.752419668s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:21.750054886 +0000 UTC m=+156.738826398" watchObservedRunningTime="2026-01-26 22:42:21.752419668 +0000 UTC m=+156.741191180" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.812774 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:21 crc kubenswrapper[4793]: E0126 22:42:21.813045 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.31302793 +0000 UTC m=+157.301799442 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.841414 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" podStartSLOduration=129.841392392 podStartE2EDuration="2m9.841392392s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:21.806726449 +0000 UTC m=+156.795497971" watchObservedRunningTime="2026-01-26 22:42:21.841392392 +0000 UTC m=+156.830163904" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.842310 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-jv28b" podStartSLOduration=7.84230556 podStartE2EDuration="7.84230556s" podCreationTimestamp="2026-01-26 22:42:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:21.84065584 +0000 UTC m=+156.829427352" watchObservedRunningTime="2026-01-26 22:42:21.84230556 +0000 UTC m=+156.831077072" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.880355 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" podStartSLOduration=129.880337096 podStartE2EDuration="2m9.880337096s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:21.879545552 +0000 UTC m=+156.868317054" watchObservedRunningTime="2026-01-26 22:42:21.880337096 +0000 UTC m=+156.869108608" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.911965 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-rz5xt" podStartSLOduration=7.911942706 podStartE2EDuration="7.911942706s" podCreationTimestamp="2026-01-26 22:42:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:21.910306627 +0000 UTC m=+156.899078139" watchObservedRunningTime="2026-01-26 22:42:21.911942706 +0000 UTC m=+156.900714218" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.915024 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:21 crc kubenswrapper[4793]: E0126 22:42:21.915990 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.415969849 +0000 UTC m=+157.404741361 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.993634 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" podStartSLOduration=129.993615577 podStartE2EDuration="2m9.993615577s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:21.95357089 +0000 UTC m=+156.942342412" watchObservedRunningTime="2026-01-26 22:42:21.993615577 +0000 UTC m=+156.982387089" Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.017734 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.018095 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.518077201 +0000 UTC m=+157.506848713 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.039371 4793 patch_prober.go:28] interesting pod/router-default-5444994796-xpkxl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 22:42:22 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 26 22:42:22 crc kubenswrapper[4793]: [+]process-running ok Jan 26 22:42:22 crc kubenswrapper[4793]: healthz check failed Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.039443 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xpkxl" podUID="6b2e3c07-cf6d-4c97-b19b-dacc24b947d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.072821 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-6kszs" podStartSLOduration=130.072795754 podStartE2EDuration="2m10.072795754s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:22.00257681 +0000 UTC m=+156.991348322" watchObservedRunningTime="2026-01-26 22:42:22.072795754 +0000 UTC m=+157.061567266" Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.120597 4793 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.120979 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.620966748 +0000 UTC m=+157.609738260 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.123477 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" podStartSLOduration=130.123453173 podStartE2EDuration="2m10.123453173s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:22.121328709 +0000 UTC m=+157.110100221" watchObservedRunningTime="2026-01-26 22:42:22.123453173 +0000 UTC m=+157.112224685" Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.123860 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlt29" podStartSLOduration=130.123854986 podStartE2EDuration="2m10.123854986s" 
podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:22.076536798 +0000 UTC m=+157.065308310" watchObservedRunningTime="2026-01-26 22:42:22.123854986 +0000 UTC m=+157.112626498" Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.191235 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.222238 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.222554 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.722511604 +0000 UTC m=+157.711283116 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.222834 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.223314 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.723307058 +0000 UTC m=+157.712078570 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.323824 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.324040 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.824005699 +0000 UTC m=+157.812777211 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.324182 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.324566 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.824547905 +0000 UTC m=+157.813319417 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.425951 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.426181 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.926142563 +0000 UTC m=+157.914914075 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.426414 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.426826 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.926793962 +0000 UTC m=+157.915565474 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.528214 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.528357 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.028325498 +0000 UTC m=+158.017097000 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.528390 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.528733 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.02872596 +0000 UTC m=+158.017497472 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.614035 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" event={"ID":"cdcb0afe-af56-4b15-b712-5443e772c75d","Type":"ContainerStarted","Data":"5ff07bb3005ac0c7f16049b08f8e759fda4cef2f194db398d6acbabcd0507b99"} Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.614749 4793 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rzprv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.614816 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" podUID="f7779d32-d1d6-4e24-b59e-04461b1021c3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.627100 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.629338 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.629544 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.129512143 +0000 UTC m=+158.118283655 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.629682 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.630055 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.130038059 +0000 UTC m=+158.118809571 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.643416 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.730880 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.731128 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.231091171 +0000 UTC m=+158.219862683 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.732041 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.734556 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.234548806 +0000 UTC m=+158.223320308 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.834430 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.834804 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.334768262 +0000 UTC m=+158.323539774 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.935991 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.936727 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.43670513 +0000 UTC m=+158.425476642 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.037445 4793 patch_prober.go:28] interesting pod/router-default-5444994796-xpkxl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 22:42:23 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 26 22:42:23 crc kubenswrapper[4793]: [+]process-running ok Jan 26 22:42:23 crc kubenswrapper[4793]: healthz check failed Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.037530 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xpkxl" podUID="6b2e3c07-cf6d-4c97-b19b-dacc24b947d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.037816 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.041736 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 22:42:23.538156923 +0000 UTC m=+158.526928425 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.138987 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.139567 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.639553494 +0000 UTC m=+158.628325006 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.241406 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.241670 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.741635747 +0000 UTC m=+158.730407259 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.242035 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.242425 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.74241246 +0000 UTC m=+158.731183972 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.308414 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.344000 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.344236 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.844194524 +0000 UTC m=+158.832966036 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.344492 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.344865 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.844857334 +0000 UTC m=+158.833628846 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.445375 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.445857 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.945808652 +0000 UTC m=+158.934580164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.547268 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.547735 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.047722779 +0000 UTC m=+159.036494291 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.558817 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.628691 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" event={"ID":"cdcb0afe-af56-4b15-b712-5443e772c75d","Type":"ContainerStarted","Data":"8209f96ec6a0c7644736fbf68690c1feddeb7374c6997cbc745c5fca5e82f7b0"} Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.629582 4793 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rzprv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.629624 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" podUID="f7779d32-d1d6-4e24-b59e-04461b1021c3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.649377 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.649765 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.14973749 +0000 UTC m=+159.138509002 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.671647 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-78ml2"] Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.672572 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.675064 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.691918 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-78ml2"] Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.751148 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.753290 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.253277947 +0000 UTC m=+159.242049459 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.853905 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.854084 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.354052959 +0000 UTC m=+159.342824471 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.854164 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-utilities\") pod \"certified-operators-78ml2\" (UID: \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\") " pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.854230 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tf6c\" (UniqueName: \"kubernetes.io/projected/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-kube-api-access-7tf6c\") pod \"certified-operators-78ml2\" (UID: \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\") " pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.854341 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.854416 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-catalog-content\") pod \"certified-operators-78ml2\" (UID: \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\") " pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.854795 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.354786592 +0000 UTC m=+159.343558104 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.859488 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rmxpm"] Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.860481 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.863490 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.878470 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rmxpm"] Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.955603 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.955791 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.45575865 +0000 UTC m=+159.444530152 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.956986 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-utilities\") pod \"certified-operators-78ml2\" (UID: \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\") " pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.957088 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-utilities\") pod \"certified-operators-78ml2\" (UID: \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\") " pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.957227 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tf6c\" (UniqueName: \"kubernetes.io/projected/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-kube-api-access-7tf6c\") pod \"certified-operators-78ml2\" (UID: \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\") " pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.957664 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea0b73a-1820-4411-bbbf-acd3d22899e0-utilities\") pod \"community-operators-rmxpm\" (UID: \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\") " 
pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.957812 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.958127 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.458115542 +0000 UTC m=+159.446887054 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.958410 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea0b73a-1820-4411-bbbf-acd3d22899e0-catalog-content\") pod \"community-operators-rmxpm\" (UID: \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\") " pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.958553 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-catalog-content\") 
pod \"certified-operators-78ml2\" (UID: \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\") " pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.958885 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lhnk\" (UniqueName: \"kubernetes.io/projected/1ea0b73a-1820-4411-bbbf-acd3d22899e0-kube-api-access-5lhnk\") pod \"community-operators-rmxpm\" (UID: \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\") " pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.958833 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-catalog-content\") pod \"certified-operators-78ml2\" (UID: \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\") " pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.991896 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tf6c\" (UniqueName: \"kubernetes.io/projected/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-kube-api-access-7tf6c\") pod \"certified-operators-78ml2\" (UID: \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\") " pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.994479 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.030229 4793 patch_prober.go:28] interesting pod/router-default-5444994796-xpkxl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 22:42:24 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 26 22:42:24 crc kubenswrapper[4793]: [+]process-running ok Jan 26 22:42:24 crc kubenswrapper[4793]: healthz check failed Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.030470 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xpkxl" podUID="6b2e3c07-cf6d-4c97-b19b-dacc24b947d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.062829 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nksn9"] Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.063500 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:24 crc kubenswrapper[4793]: E0126 22:42:24.063668 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.563637869 +0000 UTC m=+159.552409381 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.064672 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lhnk\" (UniqueName: \"kubernetes.io/projected/1ea0b73a-1820-4411-bbbf-acd3d22899e0-kube-api-access-5lhnk\") pod \"community-operators-rmxpm\" (UID: \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\") " pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.064760 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea0b73a-1820-4411-bbbf-acd3d22899e0-utilities\") pod \"community-operators-rmxpm\" (UID: \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\") " pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.065000 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.065123 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea0b73a-1820-4411-bbbf-acd3d22899e0-catalog-content\") pod \"community-operators-rmxpm\" (UID: 
\"1ea0b73a-1820-4411-bbbf-acd3d22899e0\") " pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.065227 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea0b73a-1820-4411-bbbf-acd3d22899e0-utilities\") pod \"community-operators-rmxpm\" (UID: \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\") " pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:42:24 crc kubenswrapper[4793]: E0126 22:42:24.065357 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.565347141 +0000 UTC m=+159.554118653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.065365 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.065634 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea0b73a-1820-4411-bbbf-acd3d22899e0-catalog-content\") pod \"community-operators-rmxpm\" (UID: \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\") " pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.076936 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nksn9"] Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.094841 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lhnk\" (UniqueName: \"kubernetes.io/projected/1ea0b73a-1820-4411-bbbf-acd3d22899e0-kube-api-access-5lhnk\") pod \"community-operators-rmxpm\" (UID: \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\") " pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.166091 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:24 crc kubenswrapper[4793]: E0126 22:42:24.166314 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.666281958 +0000 UTC m=+159.655053470 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.166785 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.166899 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca21a12c-d4c3-414b-816c-858756e16147-utilities\") pod \"certified-operators-nksn9\" (UID: \"ca21a12c-d4c3-414b-816c-858756e16147\") " pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.166980 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca21a12c-d4c3-414b-816c-858756e16147-catalog-content\") pod \"certified-operators-nksn9\" (UID: \"ca21a12c-d4c3-414b-816c-858756e16147\") " pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.167159 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8sb6\" (UniqueName: 
\"kubernetes.io/projected/ca21a12c-d4c3-414b-816c-858756e16147-kube-api-access-v8sb6\") pod \"certified-operators-nksn9\" (UID: \"ca21a12c-d4c3-414b-816c-858756e16147\") " pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:42:24 crc kubenswrapper[4793]: E0126 22:42:24.167448 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.667412463 +0000 UTC m=+159.656184015 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.190506 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.272377 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.273108 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca21a12c-d4c3-414b-816c-858756e16147-utilities\") pod \"certified-operators-nksn9\" (UID: \"ca21a12c-d4c3-414b-816c-858756e16147\") " pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.273140 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca21a12c-d4c3-414b-816c-858756e16147-catalog-content\") pod \"certified-operators-nksn9\" (UID: \"ca21a12c-d4c3-414b-816c-858756e16147\") " pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.273162 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8sb6\" (UniqueName: \"kubernetes.io/projected/ca21a12c-d4c3-414b-816c-858756e16147-kube-api-access-v8sb6\") pod \"certified-operators-nksn9\" (UID: \"ca21a12c-d4c3-414b-816c-858756e16147\") " pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:42:24 crc kubenswrapper[4793]: E0126 22:42:24.274106 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 22:42:24.774086945 +0000 UTC m=+159.762858457 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.280550 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca21a12c-d4c3-414b-816c-858756e16147-utilities\") pod \"certified-operators-nksn9\" (UID: \"ca21a12c-d4c3-414b-816c-858756e16147\") " pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.286564 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca21a12c-d4c3-414b-816c-858756e16147-catalog-content\") pod \"certified-operators-nksn9\" (UID: \"ca21a12c-d4c3-414b-816c-858756e16147\") " pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.296769 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.297891 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.329244 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8sb6\" (UniqueName: \"kubernetes.io/projected/ca21a12c-d4c3-414b-816c-858756e16147-kube-api-access-v8sb6\") pod \"certified-operators-nksn9\" (UID: \"ca21a12c-d4c3-414b-816c-858756e16147\") " pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.355858 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.364240 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.364399 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.380806 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dwxx5"] Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.387586 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.387814 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.387882 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65214f74-e7f7-41fa-b0f4-3efa90a955a9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"65214f74-e7f7-41fa-b0f4-3efa90a955a9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.387915 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65214f74-e7f7-41fa-b0f4-3efa90a955a9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"65214f74-e7f7-41fa-b0f4-3efa90a955a9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 22:42:24 crc kubenswrapper[4793]: E0126 22:42:24.388334 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.888315006 +0000 UTC m=+159.877086518 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.404217 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.450201 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dwxx5"] Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.488655 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.488906 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1745bb39-265a-4318-8ede-bd919dc967d0-utilities\") pod \"community-operators-dwxx5\" (UID: \"1745bb39-265a-4318-8ede-bd919dc967d0\") " pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.488953 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xlqx\" (UniqueName: \"kubernetes.io/projected/1745bb39-265a-4318-8ede-bd919dc967d0-kube-api-access-4xlqx\") pod \"community-operators-dwxx5\" (UID: \"1745bb39-265a-4318-8ede-bd919dc967d0\") " 
pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.488991 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65214f74-e7f7-41fa-b0f4-3efa90a955a9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"65214f74-e7f7-41fa-b0f4-3efa90a955a9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.489014 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65214f74-e7f7-41fa-b0f4-3efa90a955a9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"65214f74-e7f7-41fa-b0f4-3efa90a955a9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.489038 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1745bb39-265a-4318-8ede-bd919dc967d0-catalog-content\") pod \"community-operators-dwxx5\" (UID: \"1745bb39-265a-4318-8ede-bd919dc967d0\") " pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:42:24 crc kubenswrapper[4793]: E0126 22:42:24.489139 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.989122 +0000 UTC m=+159.977893512 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.489205 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65214f74-e7f7-41fa-b0f4-3efa90a955a9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"65214f74-e7f7-41fa-b0f4-3efa90a955a9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.547351 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-78ml2"] Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.561398 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65214f74-e7f7-41fa-b0f4-3efa90a955a9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"65214f74-e7f7-41fa-b0f4-3efa90a955a9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.593813 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1745bb39-265a-4318-8ede-bd919dc967d0-catalog-content\") pod \"community-operators-dwxx5\" (UID: \"1745bb39-265a-4318-8ede-bd919dc967d0\") " pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.593905 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1745bb39-265a-4318-8ede-bd919dc967d0-utilities\") pod \"community-operators-dwxx5\" (UID: \"1745bb39-265a-4318-8ede-bd919dc967d0\") " pod="openshift-marketplace/community-operators-dwxx5"
Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.593936 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.593981 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xlqx\" (UniqueName: \"kubernetes.io/projected/1745bb39-265a-4318-8ede-bd919dc967d0-kube-api-access-4xlqx\") pod \"community-operators-dwxx5\" (UID: \"1745bb39-265a-4318-8ede-bd919dc967d0\") " pod="openshift-marketplace/community-operators-dwxx5"
Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.596235 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1745bb39-265a-4318-8ede-bd919dc967d0-catalog-content\") pod \"community-operators-dwxx5\" (UID: \"1745bb39-265a-4318-8ede-bd919dc967d0\") " pod="openshift-marketplace/community-operators-dwxx5"
Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.596717 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1745bb39-265a-4318-8ede-bd919dc967d0-utilities\") pod \"community-operators-dwxx5\" (UID: \"1745bb39-265a-4318-8ede-bd919dc967d0\") " pod="openshift-marketplace/community-operators-dwxx5"
Jan 26 22:42:24 crc kubenswrapper[4793]: E0126 22:42:24.597115 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:25.097088891 +0000 UTC m=+160.085860593 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.639231 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xlqx\" (UniqueName: \"kubernetes.io/projected/1745bb39-265a-4318-8ede-bd919dc967d0-kube-api-access-4xlqx\") pod \"community-operators-dwxx5\" (UID: \"1745bb39-265a-4318-8ede-bd919dc967d0\") " pod="openshift-marketplace/community-operators-dwxx5"
Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.658876 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78ml2" event={"ID":"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125","Type":"ContainerStarted","Data":"de4523d60ec6d9e2d26e56021a3eb00ace652444b6905d5221950bbb9e65363d"}
Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.682254 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" event={"ID":"cdcb0afe-af56-4b15-b712-5443e772c75d","Type":"ContainerStarted","Data":"70a7a78ecb17a11e0094f2d7a84d2a15dbb03fc29b6a22c882bfb4eb0c933bf5"}
Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.688635 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.695579 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:42:24 crc kubenswrapper[4793]: E0126 22:42:24.695892 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:25.195873714 +0000 UTC m=+160.184645226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.797152 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:24 crc kubenswrapper[4793]: E0126 22:42:24.798448 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:25.298432981 +0000 UTC m=+160.287204493 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.802408 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dwxx5"
Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.847165 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rmxpm"]
Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.899282 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:42:24 crc kubenswrapper[4793]: E0126 22:42:24.899650 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:25.399618926 +0000 UTC m=+160.388390438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.001215 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:25 crc kubenswrapper[4793]: E0126 22:42:25.002199 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:25.502166022 +0000 UTC m=+160.490937534 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.048445 4793 patch_prober.go:28] interesting pod/router-default-5444994796-xpkxl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 22:42:25 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld
Jan 26 22:42:25 crc kubenswrapper[4793]: [+]process-running ok
Jan 26 22:42:25 crc kubenswrapper[4793]: healthz check failed
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.048500 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xpkxl" podUID="6b2e3c07-cf6d-4c97-b19b-dacc24b947d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.102459 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:42:25 crc kubenswrapper[4793]: E0126 22:42:25.102756 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:25.602738049 +0000 UTC m=+160.591509561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.193998 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nksn9"]
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.204131 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:25 crc kubenswrapper[4793]: E0126 22:42:25.204532 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:25.704516362 +0000 UTC m=+160.693287874 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.262580 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.305277 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:42:25 crc kubenswrapper[4793]: E0126 22:42:25.305507 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:25.805488921 +0000 UTC m=+160.794260433 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.327252 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dwxx5"]
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.406815 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:25 crc kubenswrapper[4793]: E0126 22:42:25.407273 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:25.907247253 +0000 UTC m=+160.896018765 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.508658 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:42:25 crc kubenswrapper[4793]: E0126 22:42:25.509112 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:26.009091649 +0000 UTC m=+160.997863161 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.609940 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:25 crc kubenswrapper[4793]: E0126 22:42:25.610472 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:26.110443278 +0000 UTC m=+161.099214830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.688888 4793 generic.go:334] "Generic (PLEG): container finished" podID="5212b17b-4423-4662-b026-88d37b8e6780" containerID="cc61868d059ab224be57eb0623ed03f00405cf2898a129989b50ff353c392019" exitCode=0
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.688965 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" event={"ID":"5212b17b-4423-4662-b026-88d37b8e6780","Type":"ContainerDied","Data":"cc61868d059ab224be57eb0623ed03f00405cf2898a129989b50ff353c392019"}
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.693737 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" event={"ID":"cdcb0afe-af56-4b15-b712-5443e772c75d","Type":"ContainerStarted","Data":"08b1b0144f5aa5f7e37a6ac60796580a2bde9bcfed0b0717c28bc93d5c267506"}
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.695613 4793 generic.go:334] "Generic (PLEG): container finished" podID="1ea0b73a-1820-4411-bbbf-acd3d22899e0" containerID="c998dd75c98b69795ee5fb5dadc61e23eb1a709df4bc6c6589c9107c4d8626e1" exitCode=0
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.695688 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmxpm" event={"ID":"1ea0b73a-1820-4411-bbbf-acd3d22899e0","Type":"ContainerDied","Data":"c998dd75c98b69795ee5fb5dadc61e23eb1a709df4bc6c6589c9107c4d8626e1"}
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.695714 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmxpm" event={"ID":"1ea0b73a-1820-4411-bbbf-acd3d22899e0","Type":"ContainerStarted","Data":"b6c39e29a72b59338ca03d2810962928be44b01ddeee62784d5e0ce96eb5572d"}
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.697265 4793 generic.go:334] "Generic (PLEG): container finished" podID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" containerID="53ade5b98f6c11809a6f4eec9f04e9ed3dc37b0586623e2c12f5e703dca23ac1" exitCode=0
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.697329 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78ml2" event={"ID":"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125","Type":"ContainerDied","Data":"53ade5b98f6c11809a6f4eec9f04e9ed3dc37b0586623e2c12f5e703dca23ac1"}
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.698774 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.699014 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"65214f74-e7f7-41fa-b0f4-3efa90a955a9","Type":"ContainerStarted","Data":"cab4441579a7c1ed11b73de8bfb583604040881f6d7e93a577f73fe130acfc37"}
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.699051 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"65214f74-e7f7-41fa-b0f4-3efa90a955a9","Type":"ContainerStarted","Data":"6941af7ed8417ffd25e4f7451b69776ad3fcfcb6a2db30e20798d831a58cdcca"}
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.701075 4793 generic.go:334] "Generic (PLEG): container finished" podID="ca21a12c-d4c3-414b-816c-858756e16147" containerID="3a6a8c3db06eb192af175b0b831948bebe1a38d7235621521423f00b6264fcc2" exitCode=0
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.701170 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nksn9" event={"ID":"ca21a12c-d4c3-414b-816c-858756e16147","Type":"ContainerDied","Data":"3a6a8c3db06eb192af175b0b831948bebe1a38d7235621521423f00b6264fcc2"}
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.701493 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nksn9" event={"ID":"ca21a12c-d4c3-414b-816c-858756e16147","Type":"ContainerStarted","Data":"23aef63c8314502249bed5cdebc0d033c6303855f2021cfbe103e4220fb0c8cd"}
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.703313 4793 generic.go:334] "Generic (PLEG): container finished" podID="1745bb39-265a-4318-8ede-bd919dc967d0" containerID="4011ee6a69855743693760e7e33b998121986206fc27959b5cf1199c255c533d" exitCode=0
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.703354 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwxx5" event={"ID":"1745bb39-265a-4318-8ede-bd919dc967d0","Type":"ContainerDied","Data":"4011ee6a69855743693760e7e33b998121986206fc27959b5cf1199c255c533d"}
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.703375 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwxx5" event={"ID":"1745bb39-265a-4318-8ede-bd919dc967d0","Type":"ContainerStarted","Data":"2d7d770e8ad7e7019ebe6a83e24e325ddc42f15802e0100dce38b289197d532e"}
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.710688 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:42:25 crc kubenswrapper[4793]: E0126 22:42:25.710902 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:26.21087327 +0000 UTC m=+161.199644782 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.754474 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" podStartSLOduration=11.754453775 podStartE2EDuration="11.754453775s" podCreationTimestamp="2026-01-26 22:42:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:25.753828756 +0000 UTC m=+160.742600278" watchObservedRunningTime="2026-01-26 22:42:25.754453775 +0000 UTC m=+160.743225287"
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.770924 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.770898314 podStartE2EDuration="1.770898314s" podCreationTimestamp="2026-01-26 22:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:25.770821822 +0000 UTC m=+160.759593334" watchObservedRunningTime="2026-01-26 22:42:25.770898314 +0000 UTC m=+160.759669836"
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.821606 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:25 crc kubenswrapper[4793]: E0126 22:42:25.822543 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:26.322515443 +0000 UTC m=+161.311287045 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.823808 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.824818 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.835740 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.860415 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.860442 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.870627 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s7s8k"]
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.872180 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s7s8k"
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.874221 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.886113 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7s8k"]
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.922962 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.923174 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/810b861e-ca2f-4912-a6a3-1cbfa7bb7804-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"810b861e-ca2f-4912-a6a3-1cbfa7bb7804\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.923222 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ece571b-df1f-4605-8127-b71fb41d2189-utilities\") pod \"redhat-marketplace-s7s8k\" (UID: \"2ece571b-df1f-4605-8127-b71fb41d2189\") " pod="openshift-marketplace/redhat-marketplace-s7s8k"
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.923276 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ece571b-df1f-4605-8127-b71fb41d2189-catalog-content\") pod \"redhat-marketplace-s7s8k\" (UID: \"2ece571b-df1f-4605-8127-b71fb41d2189\") " pod="openshift-marketplace/redhat-marketplace-s7s8k"
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.923404 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwhp7\" (UniqueName: \"kubernetes.io/projected/2ece571b-df1f-4605-8127-b71fb41d2189-kube-api-access-rwhp7\") pod \"redhat-marketplace-s7s8k\" (UID: \"2ece571b-df1f-4605-8127-b71fb41d2189\") " pod="openshift-marketplace/redhat-marketplace-s7s8k"
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.923422 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810b861e-ca2f-4912-a6a3-1cbfa7bb7804-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"810b861e-ca2f-4912-a6a3-1cbfa7bb7804\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 22:42:25 crc kubenswrapper[4793]: E0126 22:42:25.923447 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:26.42342012 +0000 UTC m=+161.412191632 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.923483 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:25 crc kubenswrapper[4793]: E0126 22:42:25.923801 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:26.423795111 +0000 UTC m=+161.412566623 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.021682 4793 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.024833 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:42:26 crc kubenswrapper[4793]: E0126 22:42:26.025078 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:26.525045388 +0000 UTC m=+161.513816900 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.025228 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/810b861e-ca2f-4912-a6a3-1cbfa7bb7804-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"810b861e-ca2f-4912-a6a3-1cbfa7bb7804\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.025269 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ece571b-df1f-4605-8127-b71fb41d2189-utilities\") pod \"redhat-marketplace-s7s8k\" (UID: \"2ece571b-df1f-4605-8127-b71fb41d2189\") " pod="openshift-marketplace/redhat-marketplace-s7s8k"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.025343 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ece571b-df1f-4605-8127-b71fb41d2189-catalog-content\") pod \"redhat-marketplace-s7s8k\" (UID: \"2ece571b-df1f-4605-8127-b71fb41d2189\") " pod="openshift-marketplace/redhat-marketplace-s7s8k"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.025455 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwhp7\" (UniqueName: \"kubernetes.io/projected/2ece571b-df1f-4605-8127-b71fb41d2189-kube-api-access-rwhp7\") pod \"redhat-marketplace-s7s8k\" (UID: \"2ece571b-df1f-4605-8127-b71fb41d2189\") " pod="openshift-marketplace/redhat-marketplace-s7s8k"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.025483 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810b861e-ca2f-4912-a6a3-1cbfa7bb7804-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"810b861e-ca2f-4912-a6a3-1cbfa7bb7804\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.025931 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ece571b-df1f-4605-8127-b71fb41d2189-utilities\") pod \"redhat-marketplace-s7s8k\" (UID: \"2ece571b-df1f-4605-8127-b71fb41d2189\") " pod="openshift-marketplace/redhat-marketplace-s7s8k"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.025997 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/810b861e-ca2f-4912-a6a3-1cbfa7bb7804-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"810b861e-ca2f-4912-a6a3-1cbfa7bb7804\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.026068 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ece571b-df1f-4605-8127-b71fb41d2189-catalog-content\") pod \"redhat-marketplace-s7s8k\" (UID: \"2ece571b-df1f-4605-8127-b71fb41d2189\") " pod="openshift-marketplace/redhat-marketplace-s7s8k"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.034457 4793 patch_prober.go:28] interesting pod/router-default-5444994796-xpkxl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 22:42:26 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld
Jan 26 22:42:26 crc kubenswrapper[4793]: [+]process-running ok
Jan 26 22:42:26 crc kubenswrapper[4793]: healthz check failed
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.034554 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xpkxl" podUID="6b2e3c07-cf6d-4c97-b19b-dacc24b947d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.050937 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwhp7\" (UniqueName: \"kubernetes.io/projected/2ece571b-df1f-4605-8127-b71fb41d2189-kube-api-access-rwhp7\") pod \"redhat-marketplace-s7s8k\" (UID: \"2ece571b-df1f-4605-8127-b71fb41d2189\") " pod="openshift-marketplace/redhat-marketplace-s7s8k"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.058028 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810b861e-ca2f-4912-a6a3-1cbfa7bb7804-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"810b861e-ca2f-4912-a6a3-1cbfa7bb7804\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.127290 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:26 crc kubenswrapper[4793]: E0126 22:42:26.127825 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed.
No retries permitted until 2026-01-26 22:42:26.627802871 +0000 UTC m=+161.616574383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.180628 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.195788 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s7s8k" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.229568 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:26 crc kubenswrapper[4793]: E0126 22:42:26.230506 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:26.730474642 +0000 UTC m=+161.719246194 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.272827 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xlnl6"] Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.274763 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xlnl6" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.279848 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xlnl6"] Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.334877 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99c0251d-b287-4c00-a392-c43b8164e73d-utilities\") pod \"redhat-marketplace-xlnl6\" (UID: \"99c0251d-b287-4c00-a392-c43b8164e73d\") " pod="openshift-marketplace/redhat-marketplace-xlnl6" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.335304 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.335331 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99c0251d-b287-4c00-a392-c43b8164e73d-catalog-content\") pod \"redhat-marketplace-xlnl6\" (UID: \"99c0251d-b287-4c00-a392-c43b8164e73d\") " pod="openshift-marketplace/redhat-marketplace-xlnl6" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.335403 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpdhr\" (UniqueName: \"kubernetes.io/projected/99c0251d-b287-4c00-a392-c43b8164e73d-kube-api-access-rpdhr\") pod \"redhat-marketplace-xlnl6\" (UID: \"99c0251d-b287-4c00-a392-c43b8164e73d\") " pod="openshift-marketplace/redhat-marketplace-xlnl6" Jan 26 22:42:26 crc kubenswrapper[4793]: E0126 22:42:26.335810 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:26.835797103 +0000 UTC m=+161.824568615 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.436722 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.436968 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpdhr\" (UniqueName: \"kubernetes.io/projected/99c0251d-b287-4c00-a392-c43b8164e73d-kube-api-access-rpdhr\") pod \"redhat-marketplace-xlnl6\" (UID: \"99c0251d-b287-4c00-a392-c43b8164e73d\") " pod="openshift-marketplace/redhat-marketplace-xlnl6" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.437066 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99c0251d-b287-4c00-a392-c43b8164e73d-utilities\") pod \"redhat-marketplace-xlnl6\" (UID: \"99c0251d-b287-4c00-a392-c43b8164e73d\") " pod="openshift-marketplace/redhat-marketplace-xlnl6" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.437106 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99c0251d-b287-4c00-a392-c43b8164e73d-catalog-content\") pod \"redhat-marketplace-xlnl6\" (UID: \"99c0251d-b287-4c00-a392-c43b8164e73d\") " 
pod="openshift-marketplace/redhat-marketplace-xlnl6" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.438282 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99c0251d-b287-4c00-a392-c43b8164e73d-catalog-content\") pod \"redhat-marketplace-xlnl6\" (UID: \"99c0251d-b287-4c00-a392-c43b8164e73d\") " pod="openshift-marketplace/redhat-marketplace-xlnl6" Jan 26 22:42:26 crc kubenswrapper[4793]: E0126 22:42:26.438372 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:26.938351489 +0000 UTC m=+161.927123011 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.439081 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99c0251d-b287-4c00-a392-c43b8164e73d-utilities\") pod \"redhat-marketplace-xlnl6\" (UID: \"99c0251d-b287-4c00-a392-c43b8164e73d\") " pod="openshift-marketplace/redhat-marketplace-xlnl6" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.465373 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpdhr\" (UniqueName: \"kubernetes.io/projected/99c0251d-b287-4c00-a392-c43b8164e73d-kube-api-access-rpdhr\") pod \"redhat-marketplace-xlnl6\" (UID: 
\"99c0251d-b287-4c00-a392-c43b8164e73d\") " pod="openshift-marketplace/redhat-marketplace-xlnl6" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.486507 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.537894 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:26 crc kubenswrapper[4793]: E0126 22:42:26.538241 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:27.038227575 +0000 UTC m=+162.026999087 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.539688 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7s8k"] Jan 26 22:42:26 crc kubenswrapper[4793]: W0126 22:42:26.563959 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ece571b_df1f_4605_8127_b71fb41d2189.slice/crio-ea2f41ae1538a21ff971c4ee16cf82bf0b9ff3fb1449256723e833b410cd7678 WatchSource:0}: Error finding container ea2f41ae1538a21ff971c4ee16cf82bf0b9ff3fb1449256723e833b410cd7678: Status 404 returned error can't find the container with id ea2f41ae1538a21ff971c4ee16cf82bf0b9ff3fb1449256723e833b410cd7678 Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.606195 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xlnl6" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.639061 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:26 crc kubenswrapper[4793]: E0126 22:42:26.639277 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:27.139247405 +0000 UTC m=+162.128018907 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.639544 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:26 crc kubenswrapper[4793]: E0126 22:42:26.639935 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-26 22:42:27.139927146 +0000 UTC m=+162.128698658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.721601 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.726717 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.727460 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7s8k" event={"ID":"2ece571b-df1f-4605-8127-b71fb41d2189","Type":"ContainerStarted","Data":"ea2f41ae1538a21ff971c4ee16cf82bf0b9ff3fb1449256723e833b410cd7678"} Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.729478 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"810b861e-ca2f-4912-a6a3-1cbfa7bb7804","Type":"ContainerStarted","Data":"ef6bbd118a6dfe07628822654d8a06a28a7b19157867d05ffe1aa1bb676b3c1d"} Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.736623 4793 generic.go:334] "Generic (PLEG): container finished" podID="65214f74-e7f7-41fa-b0f4-3efa90a955a9" containerID="cab4441579a7c1ed11b73de8bfb583604040881f6d7e93a577f73fe130acfc37" exitCode=0 Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.736804 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"65214f74-e7f7-41fa-b0f4-3efa90a955a9","Type":"ContainerDied","Data":"cab4441579a7c1ed11b73de8bfb583604040881f6d7e93a577f73fe130acfc37"} Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.740557 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:26 crc kubenswrapper[4793]: E0126 22:42:26.740781 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:27.24074293 +0000 UTC m=+162.229514442 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.740961 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:26 crc kubenswrapper[4793]: E0126 22:42:26.741842 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:27.241830383 +0000 UTC m=+162.230601905 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.845511 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:26 crc kubenswrapper[4793]: E0126 22:42:26.847840 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:27.347813294 +0000 UTC m=+162.336584866 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.873240 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s2bff"] Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.875739 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s2bff" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.879014 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.921537 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s2bff"] Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.948545 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm6z5\" (UniqueName: \"kubernetes.io/projected/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-kube-api-access-zm6z5\") pod \"redhat-operators-s2bff\" (UID: \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\") " pod="openshift-marketplace/redhat-operators-s2bff" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.948634 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.948675 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-catalog-content\") pod \"redhat-operators-s2bff\" (UID: \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\") " pod="openshift-marketplace/redhat-operators-s2bff" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.948709 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-utilities\") pod 
\"redhat-operators-s2bff\" (UID: \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\") " pod="openshift-marketplace/redhat-operators-s2bff" Jan 26 22:42:26 crc kubenswrapper[4793]: E0126 22:42:26.949059 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:27.44904392 +0000 UTC m=+162.437815432 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.996086 4793 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T22:42:26.021721057Z","Handler":null,"Name":""} Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.031882 4793 patch_prober.go:28] interesting pod/router-default-5444994796-xpkxl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 22:42:27 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 26 22:42:27 crc kubenswrapper[4793]: [+]process-running ok Jan 26 22:42:27 crc kubenswrapper[4793]: healthz check failed Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.031937 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xpkxl" podUID="6b2e3c07-cf6d-4c97-b19b-dacc24b947d6" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.043594 4793 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.043655 4793 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.050351 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.050527 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm6z5\" (UniqueName: \"kubernetes.io/projected/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-kube-api-access-zm6z5\") pod \"redhat-operators-s2bff\" (UID: \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\") " pod="openshift-marketplace/redhat-operators-s2bff" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.050602 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-catalog-content\") pod \"redhat-operators-s2bff\" (UID: \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\") " pod="openshift-marketplace/redhat-operators-s2bff" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.050628 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-utilities\") pod \"redhat-operators-s2bff\" (UID: \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\") " pod="openshift-marketplace/redhat-operators-s2bff" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.051038 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-utilities\") pod \"redhat-operators-s2bff\" (UID: \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\") " pod="openshift-marketplace/redhat-operators-s2bff" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.051563 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-catalog-content\") pod \"redhat-operators-s2bff\" (UID: \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\") " pod="openshift-marketplace/redhat-operators-s2bff" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.054253 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.083901 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm6z5\" (UniqueName: \"kubernetes.io/projected/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-kube-api-access-zm6z5\") pod \"redhat-operators-s2bff\" (UID: \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\") " pod="openshift-marketplace/redhat-operators-s2bff" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.134048 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xlnl6"] Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.151291 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.152749 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.173557 4793 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.173596 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.191728 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.222284 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s2bff" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.250032 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.251948 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5212b17b-4423-4662-b026-88d37b8e6780-config-volume\") pod \"5212b17b-4423-4662-b026-88d37b8e6780\" (UID: \"5212b17b-4423-4662-b026-88d37b8e6780\") " Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.252112 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9mq4\" 
(UniqueName: \"kubernetes.io/projected/5212b17b-4423-4662-b026-88d37b8e6780-kube-api-access-m9mq4\") pod \"5212b17b-4423-4662-b026-88d37b8e6780\" (UID: \"5212b17b-4423-4662-b026-88d37b8e6780\") " Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.252133 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5212b17b-4423-4662-b026-88d37b8e6780-secret-volume\") pod \"5212b17b-4423-4662-b026-88d37b8e6780\" (UID: \"5212b17b-4423-4662-b026-88d37b8e6780\") " Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.252324 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ptb4m"] Jan 26 22:42:27 crc kubenswrapper[4793]: E0126 22:42:27.252615 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5212b17b-4423-4662-b026-88d37b8e6780" containerName="collect-profiles" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.252634 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5212b17b-4423-4662-b026-88d37b8e6780" containerName="collect-profiles" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.252768 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5212b17b-4423-4662-b026-88d37b8e6780" containerName="collect-profiles" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.254091 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5212b17b-4423-4662-b026-88d37b8e6780-config-volume" (OuterVolumeSpecName: "config-volume") pod "5212b17b-4423-4662-b026-88d37b8e6780" (UID: "5212b17b-4423-4662-b026-88d37b8e6780"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.255270 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ptb4m" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.257050 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.260163 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.262497 4793 patch_prober.go:28] interesting pod/console-f9d7485db-57ccj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.262570 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-57ccj" podUID="f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.283665 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ptb4m"] Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.283829 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5212b17b-4423-4662-b026-88d37b8e6780-kube-api-access-m9mq4" (OuterVolumeSpecName: "kube-api-access-m9mq4") pod "5212b17b-4423-4662-b026-88d37b8e6780" (UID: "5212b17b-4423-4662-b026-88d37b8e6780"). InnerVolumeSpecName "kube-api-access-m9mq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.284638 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5212b17b-4423-4662-b026-88d37b8e6780-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5212b17b-4423-4662-b026-88d37b8e6780" (UID: "5212b17b-4423-4662-b026-88d37b8e6780"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.297058 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.305809 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-2l78j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.305826 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-2l78j container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.305853 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2l78j" podUID="57535671-8438-44b0-95f3-24679160fb8d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.305881 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2l78j" podUID="57535671-8438-44b0-95f3-24679160fb8d" containerName="download-server" probeResult="failure" 
output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.355152 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-catalog-content\") pod \"redhat-operators-ptb4m\" (UID: \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\") " pod="openshift-marketplace/redhat-operators-ptb4m" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.355211 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-utilities\") pod \"redhat-operators-ptb4m\" (UID: \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\") " pod="openshift-marketplace/redhat-operators-ptb4m" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.355323 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlftt\" (UniqueName: \"kubernetes.io/projected/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-kube-api-access-jlftt\") pod \"redhat-operators-ptb4m\" (UID: \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\") " pod="openshift-marketplace/redhat-operators-ptb4m" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.355622 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9mq4\" (UniqueName: \"kubernetes.io/projected/5212b17b-4423-4662-b026-88d37b8e6780-kube-api-access-m9mq4\") on node \"crc\" DevicePath \"\"" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.355643 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5212b17b-4423-4662-b026-88d37b8e6780-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.355652 4793 reconciler_common.go:293] "Volume detached for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/5212b17b-4423-4662-b026-88d37b8e6780-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.456968 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlftt\" (UniqueName: \"kubernetes.io/projected/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-kube-api-access-jlftt\") pod \"redhat-operators-ptb4m\" (UID: \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\") " pod="openshift-marketplace/redhat-operators-ptb4m" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.457059 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-catalog-content\") pod \"redhat-operators-ptb4m\" (UID: \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\") " pod="openshift-marketplace/redhat-operators-ptb4m" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.457081 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-utilities\") pod \"redhat-operators-ptb4m\" (UID: \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\") " pod="openshift-marketplace/redhat-operators-ptb4m" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.457542 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-utilities\") pod \"redhat-operators-ptb4m\" (UID: \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\") " pod="openshift-marketplace/redhat-operators-ptb4m" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.458043 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-catalog-content\") pod \"redhat-operators-ptb4m\" (UID: 
\"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\") " pod="openshift-marketplace/redhat-operators-ptb4m" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.479978 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlftt\" (UniqueName: \"kubernetes.io/projected/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-kube-api-access-jlftt\") pod \"redhat-operators-ptb4m\" (UID: \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\") " pod="openshift-marketplace/redhat-operators-ptb4m" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.506600 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.540444 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.556826 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.598673 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ptb4m" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.809638 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.851596 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" event={"ID":"5212b17b-4423-4662-b026-88d37b8e6780","Type":"ContainerDied","Data":"f22596edcefced17885ef893f7cee49907f5d9cebcaf8c8793ed7795985ac763"} Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.851644 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f22596edcefced17885ef893f7cee49907f5d9cebcaf8c8793ed7795985ac763" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.851724 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.862405 4793 generic.go:334] "Generic (PLEG): container finished" podID="2ece571b-df1f-4605-8127-b71fb41d2189" containerID="df91a57438b106203c49f3ea641745b85b7989ad66fa5fea056e5bed77508967" exitCode=0 Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.862487 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7s8k" event={"ID":"2ece571b-df1f-4605-8127-b71fb41d2189","Type":"ContainerDied","Data":"df91a57438b106203c49f3ea641745b85b7989ad66fa5fea056e5bed77508967"} Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.877739 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s2bff"] Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.958612 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"810b861e-ca2f-4912-a6a3-1cbfa7bb7804","Type":"ContainerStarted","Data":"16af8ddfe563636e3746dd358753066c3b4429f75246547371168f83e1f97609"} Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.986173 4793 generic.go:334] "Generic (PLEG): container finished" podID="99c0251d-b287-4c00-a392-c43b8164e73d" containerID="202853395a904340f93137f4b0c85c5302f20e05e85d4ff51c5667af14431437" exitCode=0 Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.987478 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xlnl6" event={"ID":"99c0251d-b287-4c00-a392-c43b8164e73d","Type":"ContainerDied","Data":"202853395a904340f93137f4b0c85c5302f20e05e85d4ff51c5667af14431437"} Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.987526 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xlnl6" 
event={"ID":"99c0251d-b287-4c00-a392-c43b8164e73d","Type":"ContainerStarted","Data":"18efb665e88f94cc0c7aef3bce9135f6a8b698b34f0bd84750c1350537a84b4c"} Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.987539 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s8pqg"] Jan 26 22:42:28 crc kubenswrapper[4793]: I0126 22:42:28.039142 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:28 crc kubenswrapper[4793]: I0126 22:42:28.042871 4793 patch_prober.go:28] interesting pod/router-default-5444994796-xpkxl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 22:42:28 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 26 22:42:28 crc kubenswrapper[4793]: [+]process-running ok Jan 26 22:42:28 crc kubenswrapper[4793]: healthz check failed Jan 26 22:42:28 crc kubenswrapper[4793]: I0126 22:42:28.042931 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xpkxl" podUID="6b2e3c07-cf6d-4c97-b19b-dacc24b947d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 22:42:28 crc kubenswrapper[4793]: W0126 22:42:28.066681 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podceed8696_7889_4e56_b430_dc4a6d46e1e6.slice/crio-4db1e2f2c7947383605b71ec06e390f57d260fbd62a9488814232435a291d5db WatchSource:0}: Error finding container 4db1e2f2c7947383605b71ec06e390f57d260fbd62a9488814232435a291d5db: Status 404 returned error can't find the container with id 4db1e2f2c7947383605b71ec06e390f57d260fbd62a9488814232435a291d5db Jan 26 22:42:28 crc kubenswrapper[4793]: I0126 22:42:28.144443 4793 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/redhat-operators-ptb4m"] Jan 26 22:42:28 crc kubenswrapper[4793]: I0126 22:42:28.156366 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:42:28 crc kubenswrapper[4793]: W0126 22:42:28.220828 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fbd096a_9989_4aa8_8c4d_e77ca47aee86.slice/crio-21253583ba7cd52b82e807bf5d28038f8e777d6d397df9fcae9a16cd614b0f2c WatchSource:0}: Error finding container 21253583ba7cd52b82e807bf5d28038f8e777d6d397df9fcae9a16cd614b0f2c: Status 404 returned error can't find the container with id 21253583ba7cd52b82e807bf5d28038f8e777d6d397df9fcae9a16cd614b0f2c Jan 26 22:42:28 crc kubenswrapper[4793]: I0126 22:42:28.455823 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 22:42:28 crc kubenswrapper[4793]: I0126 22:42:28.581726 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65214f74-e7f7-41fa-b0f4-3efa90a955a9-kube-api-access\") pod \"65214f74-e7f7-41fa-b0f4-3efa90a955a9\" (UID: \"65214f74-e7f7-41fa-b0f4-3efa90a955a9\") " Jan 26 22:42:28 crc kubenswrapper[4793]: I0126 22:42:28.581828 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65214f74-e7f7-41fa-b0f4-3efa90a955a9-kubelet-dir\") pod \"65214f74-e7f7-41fa-b0f4-3efa90a955a9\" (UID: \"65214f74-e7f7-41fa-b0f4-3efa90a955a9\") " Jan 26 22:42:28 crc kubenswrapper[4793]: I0126 22:42:28.582357 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65214f74-e7f7-41fa-b0f4-3efa90a955a9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod 
"65214f74-e7f7-41fa-b0f4-3efa90a955a9" (UID: "65214f74-e7f7-41fa-b0f4-3efa90a955a9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:42:28 crc kubenswrapper[4793]: I0126 22:42:28.593467 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65214f74-e7f7-41fa-b0f4-3efa90a955a9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "65214f74-e7f7-41fa-b0f4-3efa90a955a9" (UID: "65214f74-e7f7-41fa-b0f4-3efa90a955a9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:42:28 crc kubenswrapper[4793]: I0126 22:42:28.683158 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65214f74-e7f7-41fa-b0f4-3efa90a955a9-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 22:42:28 crc kubenswrapper[4793]: I0126 22:42:28.683420 4793 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65214f74-e7f7-41fa-b0f4-3efa90a955a9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.048509 4793 patch_prober.go:28] interesting pod/router-default-5444994796-xpkxl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 22:42:29 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 26 22:42:29 crc kubenswrapper[4793]: [+]process-running ok Jan 26 22:42:29 crc kubenswrapper[4793]: healthz check failed Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.048582 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xpkxl" podUID="6b2e3c07-cf6d-4c97-b19b-dacc24b947d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 22:42:29 
crc kubenswrapper[4793]: I0126 22:42:29.085584 4793 generic.go:334] "Generic (PLEG): container finished" podID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" containerID="43bfb5939bd86706aa081141535e22be807fc9c0577dcd35f30a1721eb3de604" exitCode=0 Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.085692 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2bff" event={"ID":"b0a722d2-056a-4bf2-a33c-719ee8aba7a8","Type":"ContainerDied","Data":"43bfb5939bd86706aa081141535e22be807fc9c0577dcd35f30a1721eb3de604"} Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.085724 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2bff" event={"ID":"b0a722d2-056a-4bf2-a33c-719ee8aba7a8","Type":"ContainerStarted","Data":"ab28d055e5e6d116639cc20c8075ccee993ec8ea1d3f84c9e75e2020b4dbfa88"} Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.103705 4793 generic.go:334] "Generic (PLEG): container finished" podID="810b861e-ca2f-4912-a6a3-1cbfa7bb7804" containerID="16af8ddfe563636e3746dd358753066c3b4429f75246547371168f83e1f97609" exitCode=0 Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.103818 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"810b861e-ca2f-4912-a6a3-1cbfa7bb7804","Type":"ContainerDied","Data":"16af8ddfe563636e3746dd358753066c3b4429f75246547371168f83e1f97609"} Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.127787 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"65214f74-e7f7-41fa-b0f4-3efa90a955a9","Type":"ContainerDied","Data":"6941af7ed8417ffd25e4f7451b69776ad3fcfcb6a2db30e20798d831a58cdcca"} Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.127856 4793 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="6941af7ed8417ffd25e4f7451b69776ad3fcfcb6a2db30e20798d831a58cdcca" Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.127920 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.147094 4793 generic.go:334] "Generic (PLEG): container finished" podID="2fbd096a-9989-4aa8-8c4d-e77ca47aee86" containerID="b304947f54f99be81a30ff6485558ffb1584de7d232039f73c545f74cd7850ec" exitCode=0 Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.147210 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptb4m" event={"ID":"2fbd096a-9989-4aa8-8c4d-e77ca47aee86","Type":"ContainerDied","Data":"b304947f54f99be81a30ff6485558ffb1584de7d232039f73c545f74cd7850ec"} Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.147249 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptb4m" event={"ID":"2fbd096a-9989-4aa8-8c4d-e77ca47aee86","Type":"ContainerStarted","Data":"21253583ba7cd52b82e807bf5d28038f8e777d6d397df9fcae9a16cd614b0f2c"} Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.175995 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" event={"ID":"ceed8696-7889-4e56-b430-dc4a6d46e1e6","Type":"ContainerStarted","Data":"3d5ea8649cdaffecebca0a6eab6267672b0e1746c4d8f1584f0ae6957690d95a"} Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.176057 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" event={"ID":"ceed8696-7889-4e56-b430-dc4a6d46e1e6","Type":"ContainerStarted","Data":"4db1e2f2c7947383605b71ec06e390f57d260fbd62a9488814232435a291d5db"} Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.177432 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.226756 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" podStartSLOduration=137.226725002 podStartE2EDuration="2m17.226725002s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:29.213784679 +0000 UTC m=+164.202556191" watchObservedRunningTime="2026-01-26 22:42:29.226725002 +0000 UTC m=+164.215496514" Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.456852 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.606585 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/810b861e-ca2f-4912-a6a3-1cbfa7bb7804-kubelet-dir\") pod \"810b861e-ca2f-4912-a6a3-1cbfa7bb7804\" (UID: \"810b861e-ca2f-4912-a6a3-1cbfa7bb7804\") " Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.606676 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810b861e-ca2f-4912-a6a3-1cbfa7bb7804-kube-api-access\") pod \"810b861e-ca2f-4912-a6a3-1cbfa7bb7804\" (UID: \"810b861e-ca2f-4912-a6a3-1cbfa7bb7804\") " Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.606733 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/810b861e-ca2f-4912-a6a3-1cbfa7bb7804-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "810b861e-ca2f-4912-a6a3-1cbfa7bb7804" (UID: "810b861e-ca2f-4912-a6a3-1cbfa7bb7804"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.607095 4793 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/810b861e-ca2f-4912-a6a3-1cbfa7bb7804-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.614292 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/810b861e-ca2f-4912-a6a3-1cbfa7bb7804-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "810b861e-ca2f-4912-a6a3-1cbfa7bb7804" (UID: "810b861e-ca2f-4912-a6a3-1cbfa7bb7804"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.710243 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810b861e-ca2f-4912-a6a3-1cbfa7bb7804-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 22:42:30 crc kubenswrapper[4793]: I0126 22:42:30.030985 4793 patch_prober.go:28] interesting pod/router-default-5444994796-xpkxl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 22:42:30 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 26 22:42:30 crc kubenswrapper[4793]: [+]process-running ok Jan 26 22:42:30 crc kubenswrapper[4793]: healthz check failed Jan 26 22:42:30 crc kubenswrapper[4793]: I0126 22:42:30.031048 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xpkxl" podUID="6b2e3c07-cf6d-4c97-b19b-dacc24b947d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 22:42:30 crc kubenswrapper[4793]: I0126 22:42:30.278058 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 22:42:30 crc kubenswrapper[4793]: I0126 22:42:30.278039 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"810b861e-ca2f-4912-a6a3-1cbfa7bb7804","Type":"ContainerDied","Data":"ef6bbd118a6dfe07628822654d8a06a28a7b19157867d05ffe1aa1bb676b3c1d"}
Jan 26 22:42:30 crc kubenswrapper[4793]: I0126 22:42:30.278119 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef6bbd118a6dfe07628822654d8a06a28a7b19157867d05ffe1aa1bb676b3c1d"
Jan 26 22:42:31 crc kubenswrapper[4793]: I0126 22:42:31.035565 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-xpkxl"
Jan 26 22:42:31 crc kubenswrapper[4793]: I0126 22:42:31.042742 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-xpkxl"
Jan 26 22:42:32 crc kubenswrapper[4793]: I0126 22:42:32.868889 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-jv28b"
Jan 26 22:42:34 crc kubenswrapper[4793]: I0126 22:42:34.805634 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:42:34 crc kubenswrapper[4793]: I0126 22:42:34.833029 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:42:34 crc kubenswrapper[4793]: I0126 22:42:34.914538 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:42:37 crc kubenswrapper[4793]: I0126 22:42:37.275495 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-57ccj"
Jan 26 22:42:37 crc kubenswrapper[4793]: I0126 22:42:37.279043 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-57ccj"
Jan 26 22:42:37 crc kubenswrapper[4793]: I0126 22:42:37.322777 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-2l78j"
Jan 26 22:42:42 crc kubenswrapper[4793]: I0126 22:42:42.378262 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2jn5q"]
Jan 26 22:42:42 crc kubenswrapper[4793]: I0126 22:42:42.379440 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" podUID="58669a0f-eecb-49dd-9637-af4dc30cd20d" containerName="controller-manager" containerID="cri-o://536ca99918fb3552cb94df324b5d23422c69b27ef2ccfca8c0fb91fe19520070" gracePeriod=30
Jan 26 22:42:42 crc kubenswrapper[4793]: I0126 22:42:42.392214 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x"]
Jan 26 22:42:42 crc kubenswrapper[4793]: I0126 22:42:42.392540 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" podUID="0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" containerName="route-controller-manager" containerID="cri-o://96bf08a8416db674215d43cafdaad7142ec78bb80457a16a6b41c8934a8f7f31" gracePeriod=30
Jan 26 22:42:43 crc kubenswrapper[4793]: I0126 22:42:43.461208 4793 generic.go:334] "Generic (PLEG): container finished" podID="0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" containerID="96bf08a8416db674215d43cafdaad7142ec78bb80457a16a6b41c8934a8f7f31" exitCode=0
Jan 26 22:42:43 crc kubenswrapper[4793]: I0126 22:42:43.461282 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" event={"ID":"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a","Type":"ContainerDied","Data":"96bf08a8416db674215d43cafdaad7142ec78bb80457a16a6b41c8934a8f7f31"}
Jan 26 22:42:43 crc kubenswrapper[4793]: I0126 22:42:43.464632 4793 generic.go:334] "Generic (PLEG): container finished" podID="58669a0f-eecb-49dd-9637-af4dc30cd20d" containerID="536ca99918fb3552cb94df324b5d23422c69b27ef2ccfca8c0fb91fe19520070" exitCode=0
Jan 26 22:42:43 crc kubenswrapper[4793]: I0126 22:42:43.464691 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" event={"ID":"58669a0f-eecb-49dd-9637-af4dc30cd20d","Type":"ContainerDied","Data":"536ca99918fb3552cb94df324b5d23422c69b27ef2ccfca8c0fb91fe19520070"}
Jan 26 22:42:46 crc kubenswrapper[4793]: I0126 22:42:46.778290 4793 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-r5m9x container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 26 22:42:46 crc kubenswrapper[4793]: I0126 22:42:46.778761 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" podUID="0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 26 22:42:47 crc kubenswrapper[4793]: I0126 22:42:47.178337 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2jn5q container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Jan 26 22:42:47 crc kubenswrapper[4793]: I0126 22:42:47.178455 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" podUID="58669a0f-eecb-49dd-9637-af4dc30cd20d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Jan 26 22:42:47 crc kubenswrapper[4793]: I0126 22:42:47.306608 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:48 crc kubenswrapper[4793]: I0126 22:42:48.323307 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 22:42:48 crc kubenswrapper[4793]: I0126 22:42:48.323936 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 22:42:53 crc kubenswrapper[4793]: E0126 22:42:53.131534 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 26 22:42:53 crc kubenswrapper[4793]: E0126 22:42:53.132126 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5lhnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rmxpm_openshift-marketplace(1ea0b73a-1820-4411-bbbf-acd3d22899e0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 26 22:42:53 crc kubenswrapper[4793]: E0126 22:42:53.133384 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-rmxpm" podUID="1ea0b73a-1820-4411-bbbf-acd3d22899e0"
Jan 26 22:42:53 crc kubenswrapper[4793]: I0126 22:42:53.633995 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:42:56 crc kubenswrapper[4793]: E0126 22:42:56.683596 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rmxpm" podUID="1ea0b73a-1820-4411-bbbf-acd3d22899e0"
Jan 26 22:42:56 crc kubenswrapper[4793]: E0126 22:42:56.786294 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 26 22:42:56 crc kubenswrapper[4793]: E0126 22:42:56.786498 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zm6z5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-s2bff_openshift-marketplace(b0a722d2-056a-4bf2-a33c-719ee8aba7a8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 26 22:42:56 crc kubenswrapper[4793]: E0126 22:42:56.787712 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-s2bff" podUID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8"
Jan 26 22:42:56 crc kubenswrapper[4793]: E0126 22:42:56.832112 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 26 22:42:56 crc kubenswrapper[4793]: E0126 22:42:56.832369 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rpdhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-xlnl6_openshift-marketplace(99c0251d-b287-4c00-a392-c43b8164e73d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 26 22:42:56 crc kubenswrapper[4793]: E0126 22:42:56.833572 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-xlnl6" podUID="99c0251d-b287-4c00-a392-c43b8164e73d"
Jan 26 22:42:57 crc kubenswrapper[4793]: I0126 22:42:57.777154 4793 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-r5m9x container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 22:42:57 crc kubenswrapper[4793]: I0126 22:42:57.777656 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" podUID="0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.137944 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.177973 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2jn5q container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.178061 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" podUID="58669a0f-eecb-49dd-9637-af4dc30cd20d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.267849 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-xlnl6" podUID="99c0251d-b287-4c00-a392-c43b8164e73d"
Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.267930 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-s2bff" podUID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8"
Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.332001 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.332183 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7tf6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-78ml2_openshift-marketplace(ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.334637 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-78ml2" podUID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.416867 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.422491 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.467605 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"]
Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.467885 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58669a0f-eecb-49dd-9637-af4dc30cd20d" containerName="controller-manager"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.467898 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="58669a0f-eecb-49dd-9637-af4dc30cd20d" containerName="controller-manager"
Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.467913 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810b861e-ca2f-4912-a6a3-1cbfa7bb7804" containerName="pruner"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.467919 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="810b861e-ca2f-4912-a6a3-1cbfa7bb7804" containerName="pruner"
Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.467931 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" containerName="route-controller-manager"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.467937 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" containerName="route-controller-manager"
Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.467947 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65214f74-e7f7-41fa-b0f4-3efa90a955a9" containerName="pruner"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.467953 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="65214f74-e7f7-41fa-b0f4-3efa90a955a9" containerName="pruner"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.468090 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" containerName="route-controller-manager"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.468104 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="65214f74-e7f7-41fa-b0f4-3efa90a955a9" containerName="pruner"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.468115 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="58669a0f-eecb-49dd-9637-af4dc30cd20d" containerName="controller-manager"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.468125 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="810b861e-ca2f-4912-a6a3-1cbfa7bb7804" containerName="pruner"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.468565 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488046 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-proxy-ca-bundles\") pod \"58669a0f-eecb-49dd-9637-af4dc30cd20d\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") "
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488144 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-client-ca\") pod \"58669a0f-eecb-49dd-9637-af4dc30cd20d\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") "
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488241 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-client-ca\") pod \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") "
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488260 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-config\") pod \"58669a0f-eecb-49dd-9637-af4dc30cd20d\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") "
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488281 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p74mk\" (UniqueName: \"kubernetes.io/projected/58669a0f-eecb-49dd-9637-af4dc30cd20d-kube-api-access-p74mk\") pod \"58669a0f-eecb-49dd-9637-af4dc30cd20d\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") "
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488360 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-config\") pod \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") "
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488386 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58669a0f-eecb-49dd-9637-af4dc30cd20d-serving-cert\") pod \"58669a0f-eecb-49dd-9637-af4dc30cd20d\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") "
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488408 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmqdr\" (UniqueName: \"kubernetes.io/projected/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-kube-api-access-fmqdr\") pod \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") "
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488432 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-serving-cert\") pod \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") "
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488602 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55c1b812-ff9c-440b-ace5-60706632e1e0-serving-cert\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488629 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-config\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488651 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87fjs\" (UniqueName: \"kubernetes.io/projected/55c1b812-ff9c-440b-ace5-60706632e1e0-kube-api-access-87fjs\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488684 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-proxy-ca-bundles\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488721 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-client-ca\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.489544 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "58669a0f-eecb-49dd-9637-af4dc30cd20d" (UID: "58669a0f-eecb-49dd-9637-af4dc30cd20d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.491733 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-client-ca" (OuterVolumeSpecName: "client-ca") pod "58669a0f-eecb-49dd-9637-af4dc30cd20d" (UID: "58669a0f-eecb-49dd-9637-af4dc30cd20d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.492368 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-client-ca" (OuterVolumeSpecName: "client-ca") pod "0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" (UID: "0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.492595 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-config" (OuterVolumeSpecName: "config") pod "58669a0f-eecb-49dd-9637-af4dc30cd20d" (UID: "58669a0f-eecb-49dd-9637-af4dc30cd20d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.492842 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-config" (OuterVolumeSpecName: "config") pod "0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" (UID: "0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.499804 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-kube-api-access-fmqdr" (OuterVolumeSpecName: "kube-api-access-fmqdr") pod "0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" (UID: "0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a"). InnerVolumeSpecName "kube-api-access-fmqdr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.504011 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58669a0f-eecb-49dd-9637-af4dc30cd20d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "58669a0f-eecb-49dd-9637-af4dc30cd20d" (UID: "58669a0f-eecb-49dd-9637-af4dc30cd20d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.517411 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58669a0f-eecb-49dd-9637-af4dc30cd20d-kube-api-access-p74mk" (OuterVolumeSpecName: "kube-api-access-p74mk") pod "58669a0f-eecb-49dd-9637-af4dc30cd20d" (UID: "58669a0f-eecb-49dd-9637-af4dc30cd20d"). InnerVolumeSpecName "kube-api-access-p74mk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.524594 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"]
Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.529298 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.529438 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8sb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-nksn9_openshift-marketplace(ca21a12c-d4c3-414b-816c-858756e16147): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.530813 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-nksn9" podUID="ca21a12c-d4c3-414b-816c-858756e16147"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.535468 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" (UID: "0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.571678 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" event={"ID":"58669a0f-eecb-49dd-9637-af4dc30cd20d","Type":"ContainerDied","Data":"4ef809a2826a84fc4c6df7b47339752d4935d8cadd1765fb919c6712c00c442f"}
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.571762 4793 scope.go:117] "RemoveContainer" containerID="536ca99918fb3552cb94df324b5d23422c69b27ef2ccfca8c0fb91fe19520070"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.571779 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.576955 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.581403 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" event={"ID":"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a","Type":"ContainerDied","Data":"35ef9fd4d9f1a8c21342432b8e94e83c86e086b0ceab6b71120d185cc8b1ccf3"}
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590077 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55c1b812-ff9c-440b-ace5-60706632e1e0-serving-cert\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590134 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-config\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590159 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87fjs\" (UniqueName: \"kubernetes.io/projected/55c1b812-ff9c-440b-ace5-60706632e1e0-kube-api-access-87fjs\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590224 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-proxy-ca-bundles\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590262 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-client-ca\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590324 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p74mk\" (UniqueName: \"kubernetes.io/projected/58669a0f-eecb-49dd-9637-af4dc30cd20d-kube-api-access-p74mk\") on node \"crc\" DevicePath \"\""
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590334 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-config\") on node \"crc\" DevicePath \"\""
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590343 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58669a0f-eecb-49dd-9637-af4dc30cd20d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590372 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmqdr\" (UniqueName: \"kubernetes.io/projected/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-kube-api-access-fmqdr\") on node \"crc\" DevicePath \"\""
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590381 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName:
\"kubernetes.io/secret/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590389 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590398 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590406 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590414 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.591722 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-client-ca\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.594081 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-proxy-ca-bundles\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:42:58 crc 
kubenswrapper[4793]: I0126 22:42:58.599101 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-config\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.601170 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-78ml2" podUID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.601292 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-nksn9" podUID="ca21a12c-d4c3-414b-816c-858756e16147" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.601662 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55c1b812-ff9c-440b-ace5-60706632e1e0-serving-cert\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.610800 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87fjs\" (UniqueName: \"kubernetes.io/projected/55c1b812-ff9c-440b-ace5-60706632e1e0-kube-api-access-87fjs\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" 
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.615660 4793 scope.go:117] "RemoveContainer" containerID="96bf08a8416db674215d43cafdaad7142ec78bb80457a16a6b41c8934a8f7f31"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.635590 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7rl9w"]
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.643472 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x"]
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.645678 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x"]
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.654967 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2jn5q"]
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.655023 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2jn5q"]
Jan 26 22:42:58 crc kubenswrapper[4793]: W0126 22:42:58.658934 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2174e8d7_3f7f_4e0e_b17e_1f0053a52ccc.slice/crio-5021dd9f3ab9671a9f402dd730fd4cae5d8a6557528c9a73dcac8eb73239b3b3 WatchSource:0}: Error finding container 5021dd9f3ab9671a9f402dd730fd4cae5d8a6557528c9a73dcac8eb73239b3b3: Status 404 returned error can't find the container with id 5021dd9f3ab9671a9f402dd730fd4cae5d8a6557528c9a73dcac8eb73239b3b3
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.790367 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"
Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.057656 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"]
Jan 26 22:42:59 crc kubenswrapper[4793]: W0126 22:42:59.082754 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55c1b812_ff9c_440b_ace5_60706632e1e0.slice/crio-3b30b12b794bb67fe4a7c6b695a9e99c1546db39c74c75cf3941c7485309a5fc WatchSource:0}: Error finding container 3b30b12b794bb67fe4a7c6b695a9e99c1546db39c74c75cf3941c7485309a5fc: Status 404 returned error can't find the container with id 3b30b12b794bb67fe4a7c6b695a9e99c1546db39c74c75cf3941c7485309a5fc
Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.596549 4793 generic.go:334] "Generic (PLEG): container finished" podID="2ece571b-df1f-4605-8127-b71fb41d2189" containerID="9deb6cc715d13e34300108f710dc2b32d0af0395f968c159a45df8f3c1fa5d0c" exitCode=0
Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.596629 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7s8k" event={"ID":"2ece571b-df1f-4605-8127-b71fb41d2189","Type":"ContainerDied","Data":"9deb6cc715d13e34300108f710dc2b32d0af0395f968c159a45df8f3c1fa5d0c"}
Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.599823 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" event={"ID":"55c1b812-ff9c-440b-ace5-60706632e1e0","Type":"ContainerStarted","Data":"4e9a91212ef600945af935beead74da745fac2e85779d11a2bbe732917961c99"}
Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.599879 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" event={"ID":"55c1b812-ff9c-440b-ace5-60706632e1e0","Type":"ContainerStarted","Data":"3b30b12b794bb67fe4a7c6b695a9e99c1546db39c74c75cf3941c7485309a5fc"}
Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.600176 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"
Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.604341 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" event={"ID":"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc","Type":"ContainerStarted","Data":"fb2c9fd91420503e13b5501b06b30c9ba2287c596abc2e41929c464cc9b9de23"}
Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.604374 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" event={"ID":"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc","Type":"ContainerStarted","Data":"2243204da7184719e2d6f61ad610cf88e055053f094f80b696d3f43a64d755e6"}
Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.604384 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" event={"ID":"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc","Type":"ContainerStarted","Data":"5021dd9f3ab9671a9f402dd730fd4cae5d8a6557528c9a73dcac8eb73239b3b3"}
Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.606276 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"
Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.606632 4793 generic.go:334] "Generic (PLEG): container finished" podID="2fbd096a-9989-4aa8-8c4d-e77ca47aee86" containerID="1465085f9d0d5e083e962166693b9df09dd501d1870a56d5375b170e0c99986d" exitCode=0
Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.606693 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptb4m" event={"ID":"2fbd096a-9989-4aa8-8c4d-e77ca47aee86","Type":"ContainerDied","Data":"1465085f9d0d5e083e962166693b9df09dd501d1870a56d5375b170e0c99986d"}
Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.610897 4793 generic.go:334] "Generic (PLEG): container finished" podID="1745bb39-265a-4318-8ede-bd919dc967d0" containerID="bed291dd0f9fce29a9ab1fa83bb9fd135a0abe6baa8876daeb2b187275f78065" exitCode=0
Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.610936 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwxx5" event={"ID":"1745bb39-265a-4318-8ede-bd919dc967d0","Type":"ContainerDied","Data":"bed291dd0f9fce29a9ab1fa83bb9fd135a0abe6baa8876daeb2b187275f78065"}
Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.677801 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-7rl9w" podStartSLOduration=167.677784292 podStartE2EDuration="2m47.677784292s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:59.674581195 +0000 UTC m=+194.663352707" watchObservedRunningTime="2026-01-26 22:42:59.677784292 +0000 UTC m=+194.666555804"
Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.690744 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" podStartSLOduration=17.690725396 podStartE2EDuration="17.690725396s" podCreationTimestamp="2026-01-26 22:42:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:59.690081176 +0000 UTC m=+194.678852688" watchObservedRunningTime="2026-01-26 22:42:59.690725396 +0000 UTC m=+194.679496908"
Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.766838 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" path="/var/lib/kubelet/pods/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a/volumes"
Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.767623 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58669a0f-eecb-49dd-9637-af4dc30cd20d" path="/var/lib/kubelet/pods/58669a0f-eecb-49dd-9637-af4dc30cd20d/volumes"
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.470936 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"]
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.472520 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.476275 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.481922 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"]
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.482478 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.482879 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.483932 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.484135 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.491412 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.519580 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d50b289-073a-4890-ab14-8ce2138b21b6-client-ca\") pod \"route-controller-manager-c46bc4746-87xtp\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.519686 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftrlv\" (UniqueName: \"kubernetes.io/projected/5d50b289-073a-4890-ab14-8ce2138b21b6-kube-api-access-ftrlv\") pod \"route-controller-manager-c46bc4746-87xtp\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.519786 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d50b289-073a-4890-ab14-8ce2138b21b6-serving-cert\") pod \"route-controller-manager-c46bc4746-87xtp\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.519885 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d50b289-073a-4890-ab14-8ce2138b21b6-config\") pod \"route-controller-manager-c46bc4746-87xtp\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.620779 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftrlv\" (UniqueName: \"kubernetes.io/projected/5d50b289-073a-4890-ab14-8ce2138b21b6-kube-api-access-ftrlv\") pod \"route-controller-manager-c46bc4746-87xtp\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.620866 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d50b289-073a-4890-ab14-8ce2138b21b6-serving-cert\") pod \"route-controller-manager-c46bc4746-87xtp\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.620893 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d50b289-073a-4890-ab14-8ce2138b21b6-config\") pod \"route-controller-manager-c46bc4746-87xtp\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.620933 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d50b289-073a-4890-ab14-8ce2138b21b6-client-ca\") pod \"route-controller-manager-c46bc4746-87xtp\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.622181 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d50b289-073a-4890-ab14-8ce2138b21b6-client-ca\") pod \"route-controller-manager-c46bc4746-87xtp\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.622458 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d50b289-073a-4890-ab14-8ce2138b21b6-config\") pod \"route-controller-manager-c46bc4746-87xtp\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.622514 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7s8k" event={"ID":"2ece571b-df1f-4605-8127-b71fb41d2189","Type":"ContainerStarted","Data":"b983778d2b376f12a054d385d4e78fd86cd0c3fd1e88a5f4df510c38cf861052"}
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.625490 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptb4m" event={"ID":"2fbd096a-9989-4aa8-8c4d-e77ca47aee86","Type":"ContainerStarted","Data":"50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7"}
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.630839 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwxx5" event={"ID":"1745bb39-265a-4318-8ede-bd919dc967d0","Type":"ContainerStarted","Data":"2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878"}
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.632252 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d50b289-073a-4890-ab14-8ce2138b21b6-serving-cert\") pod \"route-controller-manager-c46bc4746-87xtp\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.643567 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftrlv\" (UniqueName: \"kubernetes.io/projected/5d50b289-073a-4890-ab14-8ce2138b21b6-kube-api-access-ftrlv\") pod \"route-controller-manager-c46bc4746-87xtp\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.645766 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s7s8k" podStartSLOduration=3.49522839 podStartE2EDuration="35.645750339s" podCreationTimestamp="2026-01-26 22:42:25 +0000 UTC" firstStartedPulling="2026-01-26 22:42:27.911607674 +0000 UTC m=+162.900379186" lastFinishedPulling="2026-01-26 22:43:00.062129613 +0000 UTC m=+195.050901135" observedRunningTime="2026-01-26 22:43:00.64196756 +0000 UTC m=+195.630739072" watchObservedRunningTime="2026-01-26 22:43:00.645750339 +0000 UTC m=+195.634521841"
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.668063 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dwxx5" podStartSLOduration=2.375496819 podStartE2EDuration="36.668042058s" podCreationTimestamp="2026-01-26 22:42:24 +0000 UTC" firstStartedPulling="2026-01-26 22:42:25.705071574 +0000 UTC m=+160.693843086" lastFinishedPulling="2026-01-26 22:42:59.997616813 +0000 UTC m=+194.986388325" observedRunningTime="2026-01-26 22:43:00.663820765 +0000 UTC m=+195.652592297" watchObservedRunningTime="2026-01-26 22:43:00.668042058 +0000 UTC m=+195.656813570"
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.684574 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ptb4m" podStartSLOduration=2.594952834 podStartE2EDuration="33.684556641s" podCreationTimestamp="2026-01-26 22:42:27 +0000 UTC" firstStartedPulling="2026-01-26 22:42:29.149943278 +0000 UTC m=+164.138714790" lastFinishedPulling="2026-01-26 22:43:00.239547085 +0000 UTC m=+195.228318597" observedRunningTime="2026-01-26 22:43:00.68217353 +0000 UTC m=+195.670945042" watchObservedRunningTime="2026-01-26 22:43:00.684556641 +0000 UTC m=+195.673328153"
Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.798779 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"
Jan 26 22:43:01 crc kubenswrapper[4793]: I0126 22:43:01.048463 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"]
Jan 26 22:43:01 crc kubenswrapper[4793]: I0126 22:43:01.657489 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" event={"ID":"5d50b289-073a-4890-ab14-8ce2138b21b6","Type":"ContainerStarted","Data":"8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2"}
Jan 26 22:43:01 crc kubenswrapper[4793]: I0126 22:43:01.658036 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" event={"ID":"5d50b289-073a-4890-ab14-8ce2138b21b6","Type":"ContainerStarted","Data":"07b1ed7ba1378af9b686711c77d9f23a688c635a72bae5bc43266845b84ec09a"}
Jan 26 22:43:01 crc kubenswrapper[4793]: I0126 22:43:01.659684 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"
Jan 26 22:43:01 crc kubenswrapper[4793]: I0126 22:43:01.681543 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"
Jan 26 22:43:01 crc kubenswrapper[4793]: I0126 22:43:01.682107 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" podStartSLOduration=19.682085493 podStartE2EDuration="19.682085493s" podCreationTimestamp="2026-01-26 22:42:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:43:01.677795196 +0000 UTC m=+196.666566718" watchObservedRunningTime="2026-01-26 22:43:01.682085493 +0000 UTC m=+196.670857005"
Jan 26 22:43:02 crc kubenswrapper[4793]: I0126 22:43:02.341340 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"]
Jan 26 22:43:02 crc kubenswrapper[4793]: I0126 22:43:02.427100 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"]
Jan 26 22:43:02 crc kubenswrapper[4793]: I0126 22:43:02.662339 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" podUID="55c1b812-ff9c-440b-ace5-60706632e1e0" containerName="controller-manager" containerID="cri-o://4e9a91212ef600945af935beead74da745fac2e85779d11a2bbe732917961c99" gracePeriod=30
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.668610 4793 generic.go:334] "Generic (PLEG): container finished" podID="55c1b812-ff9c-440b-ace5-60706632e1e0" containerID="4e9a91212ef600945af935beead74da745fac2e85779d11a2bbe732917961c99" exitCode=0
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.668691 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" event={"ID":"55c1b812-ff9c-440b-ace5-60706632e1e0","Type":"ContainerDied","Data":"4e9a91212ef600945af935beead74da745fac2e85779d11a2bbe732917961c99"}
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.669026 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" event={"ID":"55c1b812-ff9c-440b-ace5-60706632e1e0","Type":"ContainerDied","Data":"3b30b12b794bb67fe4a7c6b695a9e99c1546db39c74c75cf3941c7485309a5fc"}
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.669047 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b30b12b794bb67fe4a7c6b695a9e99c1546db39c74c75cf3941c7485309a5fc"
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.669134 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" podUID="5d50b289-073a-4890-ab14-8ce2138b21b6" containerName="route-controller-manager" containerID="cri-o://8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2" gracePeriod=30
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.685578 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.714458 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-84c7c9c945-fvjfr"]
Jan 26 22:43:03 crc kubenswrapper[4793]: E0126 22:43:03.714747 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c1b812-ff9c-440b-ace5-60706632e1e0" containerName="controller-manager"
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.714767 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c1b812-ff9c-440b-ace5-60706632e1e0" containerName="controller-manager"
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.714891 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="55c1b812-ff9c-440b-ace5-60706632e1e0" containerName="controller-manager"
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.715379 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr"
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.726465 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84c7c9c945-fvjfr"]
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.775271 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55c1b812-ff9c-440b-ace5-60706632e1e0-serving-cert\") pod \"55c1b812-ff9c-440b-ace5-60706632e1e0\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") "
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.775362 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-client-ca\") pod \"55c1b812-ff9c-440b-ace5-60706632e1e0\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") "
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.775402 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87fjs\" (UniqueName: \"kubernetes.io/projected/55c1b812-ff9c-440b-ace5-60706632e1e0-kube-api-access-87fjs\") pod \"55c1b812-ff9c-440b-ace5-60706632e1e0\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") "
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.775436 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-proxy-ca-bundles\") pod \"55c1b812-ff9c-440b-ace5-60706632e1e0\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") "
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.775466 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-config\") pod \"55c1b812-ff9c-440b-ace5-60706632e1e0\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") "
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.775558 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-proxy-ca-bundles\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr"
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.775609 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-client-ca\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr"
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.775641 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-serving-cert\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr"
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.775688 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-config\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr"
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.775717 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptx2f\" (UniqueName: \"kubernetes.io/projected/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-kube-api-access-ptx2f\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr"
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.777992 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-client-ca" (OuterVolumeSpecName: "client-ca") pod "55c1b812-ff9c-440b-ace5-60706632e1e0" (UID: "55c1b812-ff9c-440b-ace5-60706632e1e0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.778432 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-config" (OuterVolumeSpecName: "config") pod "55c1b812-ff9c-440b-ace5-60706632e1e0" (UID: "55c1b812-ff9c-440b-ace5-60706632e1e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.778487 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "55c1b812-ff9c-440b-ace5-60706632e1e0" (UID: "55c1b812-ff9c-440b-ace5-60706632e1e0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.786641 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55c1b812-ff9c-440b-ace5-60706632e1e0-kube-api-access-87fjs" (OuterVolumeSpecName: "kube-api-access-87fjs") pod "55c1b812-ff9c-440b-ace5-60706632e1e0" (UID: "55c1b812-ff9c-440b-ace5-60706632e1e0"). InnerVolumeSpecName "kube-api-access-87fjs".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.786906 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c1b812-ff9c-440b-ace5-60706632e1e0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "55c1b812-ff9c-440b-ace5-60706632e1e0" (UID: "55c1b812-ff9c-440b-ace5-60706632e1e0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.909173 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-client-ca\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.909287 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-serving-cert\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.909365 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-config\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.909390 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptx2f\" (UniqueName: \"kubernetes.io/projected/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-kube-api-access-ptx2f\") pod 
\"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.909418 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-proxy-ca-bundles\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.909477 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55c1b812-ff9c-440b-ace5-60706632e1e0-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.909487 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.909496 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87fjs\" (UniqueName: \"kubernetes.io/projected/55c1b812-ff9c-440b-ace5-60706632e1e0-kube-api-access-87fjs\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.909508 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.909516 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.910511 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-proxy-ca-bundles\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.911095 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-client-ca\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.911985 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-config\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.915515 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-serving-cert\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.934217 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptx2f\" (UniqueName: \"kubernetes.io/projected/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-kube-api-access-ptx2f\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 
22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.117840 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.129378 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.215119 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftrlv\" (UniqueName: \"kubernetes.io/projected/5d50b289-073a-4890-ab14-8ce2138b21b6-kube-api-access-ftrlv\") pod \"5d50b289-073a-4890-ab14-8ce2138b21b6\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.215247 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d50b289-073a-4890-ab14-8ce2138b21b6-serving-cert\") pod \"5d50b289-073a-4890-ab14-8ce2138b21b6\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.215362 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d50b289-073a-4890-ab14-8ce2138b21b6-client-ca\") pod \"5d50b289-073a-4890-ab14-8ce2138b21b6\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.215408 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d50b289-073a-4890-ab14-8ce2138b21b6-config\") pod \"5d50b289-073a-4890-ab14-8ce2138b21b6\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.216366 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/5d50b289-073a-4890-ab14-8ce2138b21b6-client-ca" (OuterVolumeSpecName: "client-ca") pod "5d50b289-073a-4890-ab14-8ce2138b21b6" (UID: "5d50b289-073a-4890-ab14-8ce2138b21b6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.216617 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d50b289-073a-4890-ab14-8ce2138b21b6-config" (OuterVolumeSpecName: "config") pod "5d50b289-073a-4890-ab14-8ce2138b21b6" (UID: "5d50b289-073a-4890-ab14-8ce2138b21b6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.218764 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d50b289-073a-4890-ab14-8ce2138b21b6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5d50b289-073a-4890-ab14-8ce2138b21b6" (UID: "5d50b289-073a-4890-ab14-8ce2138b21b6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.221415 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d50b289-073a-4890-ab14-8ce2138b21b6-kube-api-access-ftrlv" (OuterVolumeSpecName: "kube-api-access-ftrlv") pod "5d50b289-073a-4890-ab14-8ce2138b21b6" (UID: "5d50b289-073a-4890-ab14-8ce2138b21b6"). InnerVolumeSpecName "kube-api-access-ftrlv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.317558 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d50b289-073a-4890-ab14-8ce2138b21b6-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.320344 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d50b289-073a-4890-ab14-8ce2138b21b6-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.320393 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftrlv\" (UniqueName: \"kubernetes.io/projected/5d50b289-073a-4890-ab14-8ce2138b21b6-kube-api-access-ftrlv\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.320467 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d50b289-073a-4890-ab14-8ce2138b21b6-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.345998 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84c7c9c945-fvjfr"] Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.676374 4793 generic.go:334] "Generic (PLEG): container finished" podID="5d50b289-073a-4890-ab14-8ce2138b21b6" containerID="8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2" exitCode=0 Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.676442 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.676482 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" event={"ID":"5d50b289-073a-4890-ab14-8ce2138b21b6","Type":"ContainerDied","Data":"8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2"} Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.676581 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" event={"ID":"5d50b289-073a-4890-ab14-8ce2138b21b6","Type":"ContainerDied","Data":"07b1ed7ba1378af9b686711c77d9f23a688c635a72bae5bc43266845b84ec09a"} Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.676616 4793 scope.go:117] "RemoveContainer" containerID="8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.690056 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" event={"ID":"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a","Type":"ContainerStarted","Data":"45b69ac25c01759287e2d0a5eea45d0fa774df75dbd92ec0b24b338a87d0949d"} Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.690096 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.700233 4793 scope.go:117] "RemoveContainer" containerID="8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2" Jan 26 22:43:04 crc kubenswrapper[4793]: E0126 22:43:04.704317 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2\": container with ID starting with 8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2 not found: ID does not exist" containerID="8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.704366 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2"} err="failed to get container status \"8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2\": rpc error: code = NotFound desc = could not find container \"8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2\": container with ID starting with 8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2 not found: ID does not exist" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.730882 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"] Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.735059 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"] Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.737735 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"] Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.740809 4793 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"] Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.803610 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.803682 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:43:05 crc kubenswrapper[4793]: I0126 22:43:05.014016 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:43:05 crc kubenswrapper[4793]: I0126 22:43:05.698833 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" event={"ID":"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a","Type":"ContainerStarted","Data":"c8090fdd1369c91b7dd744e02ec61db84949669bb837332b7d8599799222e44f"} Jan 26 22:43:05 crc kubenswrapper[4793]: I0126 22:43:05.700743 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:05 crc kubenswrapper[4793]: I0126 22:43:05.706029 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:05 crc kubenswrapper[4793]: I0126 22:43:05.725897 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" podStartSLOduration=3.72587738 podStartE2EDuration="3.72587738s" podCreationTimestamp="2026-01-26 22:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:43:05.72001635 +0000 UTC m=+200.708787882" watchObservedRunningTime="2026-01-26 22:43:05.72587738 
+0000 UTC m=+200.714648892" Jan 26 22:43:05 crc kubenswrapper[4793]: I0126 22:43:05.756406 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:43:05 crc kubenswrapper[4793]: I0126 22:43:05.770741 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55c1b812-ff9c-440b-ace5-60706632e1e0" path="/var/lib/kubelet/pods/55c1b812-ff9c-440b-ace5-60706632e1e0/volumes" Jan 26 22:43:05 crc kubenswrapper[4793]: I0126 22:43:05.771742 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d50b289-073a-4890-ab14-8ce2138b21b6" path="/var/lib/kubelet/pods/5d50b289-073a-4890-ab14-8ce2138b21b6/volumes" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.196115 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s7s8k" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.196233 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s7s8k" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.242899 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s7s8k" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.466281 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs"] Jan 26 22:43:06 crc kubenswrapper[4793]: E0126 22:43:06.466561 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d50b289-073a-4890-ab14-8ce2138b21b6" containerName="route-controller-manager" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.466584 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d50b289-073a-4890-ab14-8ce2138b21b6" containerName="route-controller-manager" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.466779 4793 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="5d50b289-073a-4890-ab14-8ce2138b21b6" containerName="route-controller-manager" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.467395 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.473698 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.473835 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.473886 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.474034 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.474067 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.474364 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.480836 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs"] Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.548532 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12030821-2df1-4742-8879-2885ea6315da-serving-cert\") pod 
\"route-controller-manager-5695d9c55d-2wvxs\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.549151 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12030821-2df1-4742-8879-2885ea6315da-client-ca\") pod \"route-controller-manager-5695d9c55d-2wvxs\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.549182 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdwct\" (UniqueName: \"kubernetes.io/projected/12030821-2df1-4742-8879-2885ea6315da-kube-api-access-sdwct\") pod \"route-controller-manager-5695d9c55d-2wvxs\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.549229 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12030821-2df1-4742-8879-2885ea6315da-config\") pod \"route-controller-manager-5695d9c55d-2wvxs\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.650488 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12030821-2df1-4742-8879-2885ea6315da-serving-cert\") pod \"route-controller-manager-5695d9c55d-2wvxs\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc 
kubenswrapper[4793]: I0126 22:43:06.650568 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12030821-2df1-4742-8879-2885ea6315da-client-ca\") pod \"route-controller-manager-5695d9c55d-2wvxs\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.650595 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdwct\" (UniqueName: \"kubernetes.io/projected/12030821-2df1-4742-8879-2885ea6315da-kube-api-access-sdwct\") pod \"route-controller-manager-5695d9c55d-2wvxs\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.650633 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12030821-2df1-4742-8879-2885ea6315da-config\") pod \"route-controller-manager-5695d9c55d-2wvxs\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.651828 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12030821-2df1-4742-8879-2885ea6315da-config\") pod \"route-controller-manager-5695d9c55d-2wvxs\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.653024 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12030821-2df1-4742-8879-2885ea6315da-client-ca\") pod \"route-controller-manager-5695d9c55d-2wvxs\" (UID: 
\"12030821-2df1-4742-8879-2885ea6315da\") " pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.658405 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12030821-2df1-4742-8879-2885ea6315da-serving-cert\") pod \"route-controller-manager-5695d9c55d-2wvxs\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.671637 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdwct\" (UniqueName: \"kubernetes.io/projected/12030821-2df1-4742-8879-2885ea6315da-kube-api-access-sdwct\") pod \"route-controller-manager-5695d9c55d-2wvxs\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.746560 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s7s8k" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.784014 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:07 crc kubenswrapper[4793]: I0126 22:43:07.166905 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dwxx5"] Jan 26 22:43:07 crc kubenswrapper[4793]: I0126 22:43:07.201298 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs"] Jan 26 22:43:07 crc kubenswrapper[4793]: W0126 22:43:07.208742 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12030821_2df1_4742_8879_2885ea6315da.slice/crio-59cadafa8d535574a7b73e67918ff7235119f1b552a808030d700e00a0cd95a9 WatchSource:0}: Error finding container 59cadafa8d535574a7b73e67918ff7235119f1b552a808030d700e00a0cd95a9: Status 404 returned error can't find the container with id 59cadafa8d535574a7b73e67918ff7235119f1b552a808030d700e00a0cd95a9 Jan 26 22:43:07 crc kubenswrapper[4793]: I0126 22:43:07.599413 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ptb4m" Jan 26 22:43:07 crc kubenswrapper[4793]: I0126 22:43:07.599484 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ptb4m" Jan 26 22:43:07 crc kubenswrapper[4793]: I0126 22:43:07.671344 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ptb4m" Jan 26 22:43:07 crc kubenswrapper[4793]: I0126 22:43:07.710232 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" event={"ID":"12030821-2df1-4742-8879-2885ea6315da","Type":"ContainerStarted","Data":"53dcef64e09d644b0cdaca29294dae740c9b20757947cb7581ca4de5d32dd853"} Jan 26 22:43:07 crc kubenswrapper[4793]: I0126 22:43:07.710281 
4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" event={"ID":"12030821-2df1-4742-8879-2885ea6315da","Type":"ContainerStarted","Data":"59cadafa8d535574a7b73e67918ff7235119f1b552a808030d700e00a0cd95a9"} Jan 26 22:43:07 crc kubenswrapper[4793]: I0126 22:43:07.710588 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:07 crc kubenswrapper[4793]: I0126 22:43:07.710892 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dwxx5" podUID="1745bb39-265a-4318-8ede-bd919dc967d0" containerName="registry-server" containerID="cri-o://2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878" gracePeriod=2 Jan 26 22:43:07 crc kubenswrapper[4793]: I0126 22:43:07.738275 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" podStartSLOduration=5.738252825 podStartE2EDuration="5.738252825s" podCreationTimestamp="2026-01-26 22:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:43:07.734538259 +0000 UTC m=+202.723309771" watchObservedRunningTime="2026-01-26 22:43:07.738252825 +0000 UTC m=+202.727024337" Jan 26 22:43:07 crc kubenswrapper[4793]: I0126 22:43:07.751456 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ptb4m" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.213846 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.215122 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.217874 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.218262 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.226020 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.271419 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.279475 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/013dc545-6073-4075-833b-e3e50bf89413-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"013dc545-6073-4075-833b-e3e50bf89413\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.279547 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/013dc545-6073-4075-833b-e3e50bf89413-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"013dc545-6073-4075-833b-e3e50bf89413\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.332349 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.380594 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/013dc545-6073-4075-833b-e3e50bf89413-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"013dc545-6073-4075-833b-e3e50bf89413\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.380673 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/013dc545-6073-4075-833b-e3e50bf89413-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"013dc545-6073-4075-833b-e3e50bf89413\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.380745 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/013dc545-6073-4075-833b-e3e50bf89413-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"013dc545-6073-4075-833b-e3e50bf89413\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.402634 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/013dc545-6073-4075-833b-e3e50bf89413-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"013dc545-6073-4075-833b-e3e50bf89413\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.482067 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1745bb39-265a-4318-8ede-bd919dc967d0-utilities\") pod \"1745bb39-265a-4318-8ede-bd919dc967d0\" (UID: \"1745bb39-265a-4318-8ede-bd919dc967d0\") " Jan 26 22:43:08 crc kubenswrapper[4793]: 
I0126 22:43:08.482538 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1745bb39-265a-4318-8ede-bd919dc967d0-catalog-content\") pod \"1745bb39-265a-4318-8ede-bd919dc967d0\" (UID: \"1745bb39-265a-4318-8ede-bd919dc967d0\") " Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.482782 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xlqx\" (UniqueName: \"kubernetes.io/projected/1745bb39-265a-4318-8ede-bd919dc967d0-kube-api-access-4xlqx\") pod \"1745bb39-265a-4318-8ede-bd919dc967d0\" (UID: \"1745bb39-265a-4318-8ede-bd919dc967d0\") " Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.483411 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1745bb39-265a-4318-8ede-bd919dc967d0-utilities" (OuterVolumeSpecName: "utilities") pod "1745bb39-265a-4318-8ede-bd919dc967d0" (UID: "1745bb39-265a-4318-8ede-bd919dc967d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.485216 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1745bb39-265a-4318-8ede-bd919dc967d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.493322 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1745bb39-265a-4318-8ede-bd919dc967d0-kube-api-access-4xlqx" (OuterVolumeSpecName: "kube-api-access-4xlqx") pod "1745bb39-265a-4318-8ede-bd919dc967d0" (UID: "1745bb39-265a-4318-8ede-bd919dc967d0"). InnerVolumeSpecName "kube-api-access-4xlqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.533772 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.560309 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1745bb39-265a-4318-8ede-bd919dc967d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1745bb39-265a-4318-8ede-bd919dc967d0" (UID: "1745bb39-265a-4318-8ede-bd919dc967d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.586825 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1745bb39-265a-4318-8ede-bd919dc967d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.586874 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xlqx\" (UniqueName: \"kubernetes.io/projected/1745bb39-265a-4318-8ede-bd919dc967d0-kube-api-access-4xlqx\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.725305 4793 generic.go:334] "Generic (PLEG): container finished" podID="1745bb39-265a-4318-8ede-bd919dc967d0" containerID="2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878" exitCode=0 Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.725485 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwxx5" event={"ID":"1745bb39-265a-4318-8ede-bd919dc967d0","Type":"ContainerDied","Data":"2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878"} Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.725949 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwxx5" event={"ID":"1745bb39-265a-4318-8ede-bd919dc967d0","Type":"ContainerDied","Data":"2d7d770e8ad7e7019ebe6a83e24e325ddc42f15802e0100dce38b289197d532e"} Jan 26 22:43:08 crc 
kubenswrapper[4793]: I0126 22:43:08.725979 4793 scope.go:117] "RemoveContainer" containerID="2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.725527 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.759296 4793 scope.go:117] "RemoveContainer" containerID="bed291dd0f9fce29a9ab1fa83bb9fd135a0abe6baa8876daeb2b187275f78065" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.762234 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dwxx5"] Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.772870 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dwxx5"] Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.790969 4793 scope.go:117] "RemoveContainer" containerID="4011ee6a69855743693760e7e33b998121986206fc27959b5cf1199c255c533d" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.814653 4793 scope.go:117] "RemoveContainer" containerID="2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878" Jan 26 22:43:08 crc kubenswrapper[4793]: E0126 22:43:08.815294 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878\": container with ID starting with 2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878 not found: ID does not exist" containerID="2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.815344 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878"} err="failed to get container status 
\"2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878\": rpc error: code = NotFound desc = could not find container \"2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878\": container with ID starting with 2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878 not found: ID does not exist" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.815383 4793 scope.go:117] "RemoveContainer" containerID="bed291dd0f9fce29a9ab1fa83bb9fd135a0abe6baa8876daeb2b187275f78065" Jan 26 22:43:08 crc kubenswrapper[4793]: E0126 22:43:08.815774 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bed291dd0f9fce29a9ab1fa83bb9fd135a0abe6baa8876daeb2b187275f78065\": container with ID starting with bed291dd0f9fce29a9ab1fa83bb9fd135a0abe6baa8876daeb2b187275f78065 not found: ID does not exist" containerID="bed291dd0f9fce29a9ab1fa83bb9fd135a0abe6baa8876daeb2b187275f78065" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.815792 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bed291dd0f9fce29a9ab1fa83bb9fd135a0abe6baa8876daeb2b187275f78065"} err="failed to get container status \"bed291dd0f9fce29a9ab1fa83bb9fd135a0abe6baa8876daeb2b187275f78065\": rpc error: code = NotFound desc = could not find container \"bed291dd0f9fce29a9ab1fa83bb9fd135a0abe6baa8876daeb2b187275f78065\": container with ID starting with bed291dd0f9fce29a9ab1fa83bb9fd135a0abe6baa8876daeb2b187275f78065 not found: ID does not exist" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.815827 4793 scope.go:117] "RemoveContainer" containerID="4011ee6a69855743693760e7e33b998121986206fc27959b5cf1199c255c533d" Jan 26 22:43:08 crc kubenswrapper[4793]: E0126 22:43:08.816076 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4011ee6a69855743693760e7e33b998121986206fc27959b5cf1199c255c533d\": container with ID starting with 4011ee6a69855743693760e7e33b998121986206fc27959b5cf1199c255c533d not found: ID does not exist" containerID="4011ee6a69855743693760e7e33b998121986206fc27959b5cf1199c255c533d" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.816093 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4011ee6a69855743693760e7e33b998121986206fc27959b5cf1199c255c533d"} err="failed to get container status \"4011ee6a69855743693760e7e33b998121986206fc27959b5cf1199c255c533d\": rpc error: code = NotFound desc = could not find container \"4011ee6a69855743693760e7e33b998121986206fc27959b5cf1199c255c533d\": container with ID starting with 4011ee6a69855743693760e7e33b998121986206fc27959b5cf1199c255c533d not found: ID does not exist" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.963245 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 22:43:08 crc kubenswrapper[4793]: W0126 22:43:08.972886 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod013dc545_6073_4075_833b_e3e50bf89413.slice/crio-3e24b766b7d3fbfde8acc08b3579c572ccd2d7d77eb6b2462e536cf8abfa5edb WatchSource:0}: Error finding container 3e24b766b7d3fbfde8acc08b3579c572ccd2d7d77eb6b2462e536cf8abfa5edb: Status 404 returned error can't find the container with id 3e24b766b7d3fbfde8acc08b3579c572ccd2d7d77eb6b2462e536cf8abfa5edb Jan 26 22:43:09 crc kubenswrapper[4793]: I0126 22:43:09.734939 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2bff" event={"ID":"b0a722d2-056a-4bf2-a33c-719ee8aba7a8","Type":"ContainerStarted","Data":"259312878a124238b69d304541bda4609b01b2ae8eda7e0b801b983fe6b8048d"} Jan 26 22:43:09 crc kubenswrapper[4793]: I0126 22:43:09.738512 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"013dc545-6073-4075-833b-e3e50bf89413","Type":"ContainerStarted","Data":"fcccf55a3af675341dd84535250133acc685d822aba196eb2b52bfe7ef72912c"} Jan 26 22:43:09 crc kubenswrapper[4793]: I0126 22:43:09.738564 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"013dc545-6073-4075-833b-e3e50bf89413","Type":"ContainerStarted","Data":"3e24b766b7d3fbfde8acc08b3579c572ccd2d7d77eb6b2462e536cf8abfa5edb"} Jan 26 22:43:09 crc kubenswrapper[4793]: I0126 22:43:09.742670 4793 generic.go:334] "Generic (PLEG): container finished" podID="1ea0b73a-1820-4411-bbbf-acd3d22899e0" containerID="f47964ab77003c1812c68563a6ab332512b80c93bca84785abf6954776142bb0" exitCode=0 Jan 26 22:43:09 crc kubenswrapper[4793]: I0126 22:43:09.742741 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmxpm" event={"ID":"1ea0b73a-1820-4411-bbbf-acd3d22899e0","Type":"ContainerDied","Data":"f47964ab77003c1812c68563a6ab332512b80c93bca84785abf6954776142bb0"} Jan 26 22:43:09 crc kubenswrapper[4793]: I0126 22:43:09.770435 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1745bb39-265a-4318-8ede-bd919dc967d0" path="/var/lib/kubelet/pods/1745bb39-265a-4318-8ede-bd919dc967d0/volumes" Jan 26 22:43:09 crc kubenswrapper[4793]: I0126 22:43:09.795080 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.795055122 podStartE2EDuration="1.795055122s" podCreationTimestamp="2026-01-26 22:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:43:09.789260165 +0000 UTC m=+204.778031687" watchObservedRunningTime="2026-01-26 22:43:09.795055122 +0000 UTC m=+204.783826654" Jan 26 22:43:10 crc kubenswrapper[4793]: I0126 22:43:10.568751 4793 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ptb4m"] Jan 26 22:43:10 crc kubenswrapper[4793]: I0126 22:43:10.569647 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ptb4m" podUID="2fbd096a-9989-4aa8-8c4d-e77ca47aee86" containerName="registry-server" containerID="cri-o://50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7" gracePeriod=2 Jan 26 22:43:10 crc kubenswrapper[4793]: I0126 22:43:10.751701 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmxpm" event={"ID":"1ea0b73a-1820-4411-bbbf-acd3d22899e0","Type":"ContainerStarted","Data":"55f45e1681d34762f5dc72f1975ec4d09f5e4114953396254618a8f16aef59e4"} Jan 26 22:43:10 crc kubenswrapper[4793]: I0126 22:43:10.754875 4793 generic.go:334] "Generic (PLEG): container finished" podID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" containerID="259312878a124238b69d304541bda4609b01b2ae8eda7e0b801b983fe6b8048d" exitCode=0 Jan 26 22:43:10 crc kubenswrapper[4793]: I0126 22:43:10.754992 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2bff" event={"ID":"b0a722d2-056a-4bf2-a33c-719ee8aba7a8","Type":"ContainerDied","Data":"259312878a124238b69d304541bda4609b01b2ae8eda7e0b801b983fe6b8048d"} Jan 26 22:43:10 crc kubenswrapper[4793]: I0126 22:43:10.757915 4793 generic.go:334] "Generic (PLEG): container finished" podID="013dc545-6073-4075-833b-e3e50bf89413" containerID="fcccf55a3af675341dd84535250133acc685d822aba196eb2b52bfe7ef72912c" exitCode=0 Jan 26 22:43:10 crc kubenswrapper[4793]: I0126 22:43:10.757982 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"013dc545-6073-4075-833b-e3e50bf89413","Type":"ContainerDied","Data":"fcccf55a3af675341dd84535250133acc685d822aba196eb2b52bfe7ef72912c"} Jan 26 22:43:10 crc kubenswrapper[4793]: I0126 
22:43:10.775386 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rmxpm" podStartSLOduration=3.337995533 podStartE2EDuration="47.775363378s" podCreationTimestamp="2026-01-26 22:42:23 +0000 UTC" firstStartedPulling="2026-01-26 22:42:25.698461333 +0000 UTC m=+160.687232845" lastFinishedPulling="2026-01-26 22:43:10.135829178 +0000 UTC m=+205.124600690" observedRunningTime="2026-01-26 22:43:10.770771412 +0000 UTC m=+205.759542944" watchObservedRunningTime="2026-01-26 22:43:10.775363378 +0000 UTC m=+205.764134910" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.718116 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ptb4m" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.776064 4793 generic.go:334] "Generic (PLEG): container finished" podID="2fbd096a-9989-4aa8-8c4d-e77ca47aee86" containerID="50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7" exitCode=0 Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.776222 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ptb4m" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.778467 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2bff" event={"ID":"b0a722d2-056a-4bf2-a33c-719ee8aba7a8","Type":"ContainerStarted","Data":"a1da675f56afc66f10254ae8acb8acfbb6033d5b14e0b5831b35b1c1d907371f"} Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.778506 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptb4m" event={"ID":"2fbd096a-9989-4aa8-8c4d-e77ca47aee86","Type":"ContainerDied","Data":"50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7"} Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.778550 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptb4m" event={"ID":"2fbd096a-9989-4aa8-8c4d-e77ca47aee86","Type":"ContainerDied","Data":"21253583ba7cd52b82e807bf5d28038f8e777d6d397df9fcae9a16cd614b0f2c"} Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.778578 4793 scope.go:117] "RemoveContainer" containerID="50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.805272 4793 scope.go:117] "RemoveContainer" containerID="1465085f9d0d5e083e962166693b9df09dd501d1870a56d5375b170e0c99986d" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.828731 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s2bff" podStartSLOduration=3.458546559 podStartE2EDuration="45.828692921s" podCreationTimestamp="2026-01-26 22:42:26 +0000 UTC" firstStartedPulling="2026-01-26 22:42:29.087149991 +0000 UTC m=+164.075921503" lastFinishedPulling="2026-01-26 22:43:11.457296353 +0000 UTC m=+206.446067865" observedRunningTime="2026-01-26 22:43:11.827659906 +0000 UTC m=+206.816431418" watchObservedRunningTime="2026-01-26 22:43:11.828692921 +0000 
UTC m=+206.817464433" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.838458 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlftt\" (UniqueName: \"kubernetes.io/projected/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-kube-api-access-jlftt\") pod \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\" (UID: \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\") " Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.838624 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-utilities\") pod \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\" (UID: \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\") " Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.838649 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-catalog-content\") pod \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\" (UID: \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\") " Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.839904 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-utilities" (OuterVolumeSpecName: "utilities") pod "2fbd096a-9989-4aa8-8c4d-e77ca47aee86" (UID: "2fbd096a-9989-4aa8-8c4d-e77ca47aee86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.847254 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-kube-api-access-jlftt" (OuterVolumeSpecName: "kube-api-access-jlftt") pod "2fbd096a-9989-4aa8-8c4d-e77ca47aee86" (UID: "2fbd096a-9989-4aa8-8c4d-e77ca47aee86"). InnerVolumeSpecName "kube-api-access-jlftt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.855748 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.855785 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlftt\" (UniqueName: \"kubernetes.io/projected/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-kube-api-access-jlftt\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.858002 4793 scope.go:117] "RemoveContainer" containerID="b304947f54f99be81a30ff6485558ffb1584de7d232039f73c545f74cd7850ec" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.897885 4793 scope.go:117] "RemoveContainer" containerID="50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7" Jan 26 22:43:11 crc kubenswrapper[4793]: E0126 22:43:11.913799 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7\": container with ID starting with 50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7 not found: ID does not exist" containerID="50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.913905 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7"} err="failed to get container status \"50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7\": rpc error: code = NotFound desc = could not find container \"50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7\": container with ID starting with 50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7 not found: ID 
does not exist" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.914354 4793 scope.go:117] "RemoveContainer" containerID="1465085f9d0d5e083e962166693b9df09dd501d1870a56d5375b170e0c99986d" Jan 26 22:43:11 crc kubenswrapper[4793]: E0126 22:43:11.920035 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1465085f9d0d5e083e962166693b9df09dd501d1870a56d5375b170e0c99986d\": container with ID starting with 1465085f9d0d5e083e962166693b9df09dd501d1870a56d5375b170e0c99986d not found: ID does not exist" containerID="1465085f9d0d5e083e962166693b9df09dd501d1870a56d5375b170e0c99986d" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.920106 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1465085f9d0d5e083e962166693b9df09dd501d1870a56d5375b170e0c99986d"} err="failed to get container status \"1465085f9d0d5e083e962166693b9df09dd501d1870a56d5375b170e0c99986d\": rpc error: code = NotFound desc = could not find container \"1465085f9d0d5e083e962166693b9df09dd501d1870a56d5375b170e0c99986d\": container with ID starting with 1465085f9d0d5e083e962166693b9df09dd501d1870a56d5375b170e0c99986d not found: ID does not exist" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.920149 4793 scope.go:117] "RemoveContainer" containerID="b304947f54f99be81a30ff6485558ffb1584de7d232039f73c545f74cd7850ec" Jan 26 22:43:11 crc kubenswrapper[4793]: E0126 22:43:11.920955 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b304947f54f99be81a30ff6485558ffb1584de7d232039f73c545f74cd7850ec\": container with ID starting with b304947f54f99be81a30ff6485558ffb1584de7d232039f73c545f74cd7850ec not found: ID does not exist" containerID="b304947f54f99be81a30ff6485558ffb1584de7d232039f73c545f74cd7850ec" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.920979 4793 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b304947f54f99be81a30ff6485558ffb1584de7d232039f73c545f74cd7850ec"} err="failed to get container status \"b304947f54f99be81a30ff6485558ffb1584de7d232039f73c545f74cd7850ec\": rpc error: code = NotFound desc = could not find container \"b304947f54f99be81a30ff6485558ffb1584de7d232039f73c545f74cd7850ec\": container with ID starting with b304947f54f99be81a30ff6485558ffb1584de7d232039f73c545f74cd7850ec not found: ID does not exist" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.978284 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fbd096a-9989-4aa8-8c4d-e77ca47aee86" (UID: "2fbd096a-9989-4aa8-8c4d-e77ca47aee86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.058938 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.146443 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.150098 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ptb4m"] Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.156612 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ptb4m"] Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.261050 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/013dc545-6073-4075-833b-e3e50bf89413-kubelet-dir\") pod \"013dc545-6073-4075-833b-e3e50bf89413\" (UID: \"013dc545-6073-4075-833b-e3e50bf89413\") " Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.261114 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/013dc545-6073-4075-833b-e3e50bf89413-kube-api-access\") pod \"013dc545-6073-4075-833b-e3e50bf89413\" (UID: \"013dc545-6073-4075-833b-e3e50bf89413\") " Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.261222 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/013dc545-6073-4075-833b-e3e50bf89413-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "013dc545-6073-4075-833b-e3e50bf89413" (UID: "013dc545-6073-4075-833b-e3e50bf89413"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.261640 4793 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/013dc545-6073-4075-833b-e3e50bf89413-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.267133 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/013dc545-6073-4075-833b-e3e50bf89413-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "013dc545-6073-4075-833b-e3e50bf89413" (UID: "013dc545-6073-4075-833b-e3e50bf89413"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.363151 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/013dc545-6073-4075-833b-e3e50bf89413-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.783366 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78ml2" event={"ID":"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125","Type":"ContainerStarted","Data":"a1c6048ea8865fb8c9c51c5203790e66b0db46dbc02e36a248d91ae4039fe139"} Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.785142 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nksn9" event={"ID":"ca21a12c-d4c3-414b-816c-858756e16147","Type":"ContainerStarted","Data":"a3c4ae7623e287fd7f6ba21100735e12f703b885b3499d723b76e0f9839e7857"} Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.787033 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.787018 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"013dc545-6073-4075-833b-e3e50bf89413","Type":"ContainerDied","Data":"3e24b766b7d3fbfde8acc08b3579c572ccd2d7d77eb6b2462e536cf8abfa5edb"} Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.787456 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e24b766b7d3fbfde8acc08b3579c572ccd2d7d77eb6b2462e536cf8abfa5edb" Jan 26 22:43:13 crc kubenswrapper[4793]: I0126 22:43:13.772391 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fbd096a-9989-4aa8-8c4d-e77ca47aee86" path="/var/lib/kubelet/pods/2fbd096a-9989-4aa8-8c4d-e77ca47aee86/volumes" Jan 26 22:43:13 crc kubenswrapper[4793]: I0126 22:43:13.795164 4793 generic.go:334] "Generic (PLEG): container finished" podID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" containerID="a1c6048ea8865fb8c9c51c5203790e66b0db46dbc02e36a248d91ae4039fe139" exitCode=0 Jan 26 22:43:13 crc kubenswrapper[4793]: I0126 22:43:13.796142 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78ml2" event={"ID":"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125","Type":"ContainerDied","Data":"a1c6048ea8865fb8c9c51c5203790e66b0db46dbc02e36a248d91ae4039fe139"} Jan 26 22:43:13 crc kubenswrapper[4793]: I0126 22:43:13.800301 4793 generic.go:334] "Generic (PLEG): container finished" podID="ca21a12c-d4c3-414b-816c-858756e16147" containerID="a3c4ae7623e287fd7f6ba21100735e12f703b885b3499d723b76e0f9839e7857" exitCode=0 Jan 26 22:43:13 crc kubenswrapper[4793]: I0126 22:43:13.800437 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nksn9" 
event={"ID":"ca21a12c-d4c3-414b-816c-858756e16147","Type":"ContainerDied","Data":"a3c4ae7623e287fd7f6ba21100735e12f703b885b3499d723b76e0f9839e7857"} Jan 26 22:43:14 crc kubenswrapper[4793]: I0126 22:43:14.192383 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:43:14 crc kubenswrapper[4793]: I0126 22:43:14.192460 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:43:14 crc kubenswrapper[4793]: I0126 22:43:14.251272 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.211906 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 22:43:16 crc kubenswrapper[4793]: E0126 22:43:16.214043 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fbd096a-9989-4aa8-8c4d-e77ca47aee86" containerName="extract-utilities" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.214086 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fbd096a-9989-4aa8-8c4d-e77ca47aee86" containerName="extract-utilities" Jan 26 22:43:16 crc kubenswrapper[4793]: E0126 22:43:16.214105 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1745bb39-265a-4318-8ede-bd919dc967d0" containerName="extract-content" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.214120 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1745bb39-265a-4318-8ede-bd919dc967d0" containerName="extract-content" Jan 26 22:43:16 crc kubenswrapper[4793]: E0126 22:43:16.214137 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fbd096a-9989-4aa8-8c4d-e77ca47aee86" containerName="extract-content" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.214152 4793 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2fbd096a-9989-4aa8-8c4d-e77ca47aee86" containerName="extract-content" Jan 26 22:43:16 crc kubenswrapper[4793]: E0126 22:43:16.214225 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1745bb39-265a-4318-8ede-bd919dc967d0" containerName="registry-server" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.214242 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1745bb39-265a-4318-8ede-bd919dc967d0" containerName="registry-server" Jan 26 22:43:16 crc kubenswrapper[4793]: E0126 22:43:16.214265 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fbd096a-9989-4aa8-8c4d-e77ca47aee86" containerName="registry-server" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.214282 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fbd096a-9989-4aa8-8c4d-e77ca47aee86" containerName="registry-server" Jan 26 22:43:16 crc kubenswrapper[4793]: E0126 22:43:16.214305 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1745bb39-265a-4318-8ede-bd919dc967d0" containerName="extract-utilities" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.214322 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1745bb39-265a-4318-8ede-bd919dc967d0" containerName="extract-utilities" Jan 26 22:43:16 crc kubenswrapper[4793]: E0126 22:43:16.214348 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="013dc545-6073-4075-833b-e3e50bf89413" containerName="pruner" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.214364 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="013dc545-6073-4075-833b-e3e50bf89413" containerName="pruner" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.214572 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fbd096a-9989-4aa8-8c4d-e77ca47aee86" containerName="registry-server" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.214600 4793 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1745bb39-265a-4318-8ede-bd919dc967d0" containerName="registry-server" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.214627 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="013dc545-6073-4075-833b-e3e50bf89413" containerName="pruner" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.215526 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.218869 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.221547 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.237465 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.320320 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-kubelet-dir\") pod \"installer-9-crc\" (UID: \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.320392 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-var-lock\") pod \"installer-9-crc\" (UID: \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.320642 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-kube-api-access\") pod \"installer-9-crc\" (UID: \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.421890 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-kubelet-dir\") pod \"installer-9-crc\" (UID: \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.421960 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-var-lock\") pod \"installer-9-crc\" (UID: \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.422011 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-kube-api-access\") pod \"installer-9-crc\" (UID: \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.422074 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-var-lock\") pod \"installer-9-crc\" (UID: \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.422136 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-kubelet-dir\") pod \"installer-9-crc\" (UID: \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.465145 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-kube-api-access\") pod \"installer-9-crc\" (UID: \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.552994 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.820066 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78ml2" event={"ID":"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125","Type":"ContainerStarted","Data":"bd6863c8a9c27707c6dccdad20d47ee96c33964b5f559bcf79975f2bd5c9666c"} Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.821658 4793 generic.go:334] "Generic (PLEG): container finished" podID="99c0251d-b287-4c00-a392-c43b8164e73d" containerID="dfa82ba8e30e94ad7515e018e23f0c12b0c126cd97491d1293d7df255ae07c88" exitCode=0 Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.821688 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xlnl6" event={"ID":"99c0251d-b287-4c00-a392-c43b8164e73d","Type":"ContainerDied","Data":"dfa82ba8e30e94ad7515e018e23f0c12b0c126cd97491d1293d7df255ae07c88"} Jan 26 22:43:17 crc kubenswrapper[4793]: I0126 22:43:17.223357 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s2bff" Jan 26 22:43:17 crc kubenswrapper[4793]: I0126 22:43:17.224014 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s2bff" Jan 26 22:43:17 crc kubenswrapper[4793]: I0126 22:43:17.781073 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 22:43:17 crc kubenswrapper[4793]: I0126 22:43:17.831242 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dbe90ef5-9518-4ec1-b9b6-479bf9732ded","Type":"ContainerStarted","Data":"86d64a9898e0bc215d83fa5f31fbdb9e7c7bf15338d21b31a5a1c977a3cf778e"} Jan 26 22:43:17 crc kubenswrapper[4793]: I0126 22:43:17.854514 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-78ml2" podStartSLOduration=4.592942649 podStartE2EDuration="54.854495608s" podCreationTimestamp="2026-01-26 22:42:23 +0000 UTC" firstStartedPulling="2026-01-26 22:42:25.699545646 +0000 UTC m=+160.688317198" lastFinishedPulling="2026-01-26 22:43:15.961098605 +0000 UTC m=+210.949870157" observedRunningTime="2026-01-26 22:43:17.853497164 +0000 UTC m=+212.842268686" watchObservedRunningTime="2026-01-26 22:43:17.854495608 +0000 UTC m=+212.843267130" Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.269536 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s2bff" podUID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" containerName="registry-server" probeResult="failure" output=< Jan 26 22:43:18 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 26 22:43:18 crc kubenswrapper[4793]: > Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.322418 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.322483 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" 
podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.322530 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.323075 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944"} pod="openshift-machine-config-operator/machine-config-daemon-5htjl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.323135 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" containerID="cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944" gracePeriod=600 Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.841091 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nksn9" event={"ID":"ca21a12c-d4c3-414b-816c-858756e16147","Type":"ContainerStarted","Data":"78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57"} Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.844936 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dbe90ef5-9518-4ec1-b9b6-479bf9732ded","Type":"ContainerStarted","Data":"ae86aae5bad8ac03faa82ea1252a93b8958791a84a7892d11dbd7cc6c74b423c"} Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.847793 4793 generic.go:334] "Generic (PLEG): container 
finished" podID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerID="f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944" exitCode=0 Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.847864 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerDied","Data":"f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944"} Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.850345 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xlnl6" event={"ID":"99c0251d-b287-4c00-a392-c43b8164e73d","Type":"ContainerStarted","Data":"6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98"} Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.875573 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nksn9" podStartSLOduration=3.254315015 podStartE2EDuration="54.875550052s" podCreationTimestamp="2026-01-26 22:42:24 +0000 UTC" firstStartedPulling="2026-01-26 22:42:25.702671611 +0000 UTC m=+160.691443123" lastFinishedPulling="2026-01-26 22:43:17.323906648 +0000 UTC m=+212.312678160" observedRunningTime="2026-01-26 22:43:18.873031856 +0000 UTC m=+213.861803388" watchObservedRunningTime="2026-01-26 22:43:18.875550052 +0000 UTC m=+213.864321574" Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.901604 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.901576038 podStartE2EDuration="2.901576038s" podCreationTimestamp="2026-01-26 22:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:43:18.897404816 +0000 UTC m=+213.886176408" watchObservedRunningTime="2026-01-26 22:43:18.901576038 +0000 UTC 
m=+213.890347590" Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.921959 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xlnl6" podStartSLOduration=2.340758543 podStartE2EDuration="52.921937751s" podCreationTimestamp="2026-01-26 22:42:26 +0000 UTC" firstStartedPulling="2026-01-26 22:42:28.085265572 +0000 UTC m=+163.074037074" lastFinishedPulling="2026-01-26 22:43:18.66644473 +0000 UTC m=+213.655216282" observedRunningTime="2026-01-26 22:43:18.921521897 +0000 UTC m=+213.910293419" watchObservedRunningTime="2026-01-26 22:43:18.921937751 +0000 UTC m=+213.910709273" Jan 26 22:43:19 crc kubenswrapper[4793]: I0126 22:43:19.860825 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerStarted","Data":"7075b8e2b6392f8f5dd779c1353cdbef01daae1157751de798f0a136ad2034cf"} Jan 26 22:43:22 crc kubenswrapper[4793]: I0126 22:43:22.312744 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84c7c9c945-fvjfr"] Jan 26 22:43:22 crc kubenswrapper[4793]: I0126 22:43:22.314280 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" podUID="b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a" containerName="controller-manager" containerID="cri-o://c8090fdd1369c91b7dd744e02ec61db84949669bb837332b7d8599799222e44f" gracePeriod=30 Jan 26 22:43:22 crc kubenswrapper[4793]: I0126 22:43:22.330049 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs"] Jan 26 22:43:22 crc kubenswrapper[4793]: I0126 22:43:22.330562 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" 
podUID="12030821-2df1-4742-8879-2885ea6315da" containerName="route-controller-manager" containerID="cri-o://53dcef64e09d644b0cdaca29294dae740c9b20757947cb7581ca4de5d32dd853" gracePeriod=30 Jan 26 22:43:23 crc kubenswrapper[4793]: I0126 22:43:23.916823 4793 generic.go:334] "Generic (PLEG): container finished" podID="b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a" containerID="c8090fdd1369c91b7dd744e02ec61db84949669bb837332b7d8599799222e44f" exitCode=0 Jan 26 22:43:23 crc kubenswrapper[4793]: I0126 22:43:23.916912 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" event={"ID":"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a","Type":"ContainerDied","Data":"c8090fdd1369c91b7dd744e02ec61db84949669bb837332b7d8599799222e44f"} Jan 26 22:43:23 crc kubenswrapper[4793]: I0126 22:43:23.920169 4793 generic.go:334] "Generic (PLEG): container finished" podID="12030821-2df1-4742-8879-2885ea6315da" containerID="53dcef64e09d644b0cdaca29294dae740c9b20757947cb7581ca4de5d32dd853" exitCode=0 Jan 26 22:43:23 crc kubenswrapper[4793]: I0126 22:43:23.920258 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" event={"ID":"12030821-2df1-4742-8879-2885ea6315da","Type":"ContainerDied","Data":"53dcef64e09d644b0cdaca29294dae740c9b20757947cb7581ca4de5d32dd853"} Jan 26 22:43:23 crc kubenswrapper[4793]: I0126 22:43:23.995453 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:43:23 crc kubenswrapper[4793]: I0126 22:43:23.995532 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.020457 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.062261 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w"] Jan 26 22:43:24 crc kubenswrapper[4793]: E0126 22:43:24.062509 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12030821-2df1-4742-8879-2885ea6315da" containerName="route-controller-manager" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.062526 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="12030821-2df1-4742-8879-2885ea6315da" containerName="route-controller-manager" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.062668 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="12030821-2df1-4742-8879-2885ea6315da" containerName="route-controller-manager" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.063436 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.069982 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w"] Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.097003 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.141790 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.176565 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12030821-2df1-4742-8879-2885ea6315da-serving-cert\") pod \"12030821-2df1-4742-8879-2885ea6315da\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.176672 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12030821-2df1-4742-8879-2885ea6315da-config\") pod \"12030821-2df1-4742-8879-2885ea6315da\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.176703 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdwct\" (UniqueName: \"kubernetes.io/projected/12030821-2df1-4742-8879-2885ea6315da-kube-api-access-sdwct\") pod \"12030821-2df1-4742-8879-2885ea6315da\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.176763 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12030821-2df1-4742-8879-2885ea6315da-client-ca\") pod \"12030821-2df1-4742-8879-2885ea6315da\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.178151 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12030821-2df1-4742-8879-2885ea6315da-config" (OuterVolumeSpecName: "config") pod "12030821-2df1-4742-8879-2885ea6315da" (UID: "12030821-2df1-4742-8879-2885ea6315da"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.178169 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12030821-2df1-4742-8879-2885ea6315da-client-ca" (OuterVolumeSpecName: "client-ca") pod "12030821-2df1-4742-8879-2885ea6315da" (UID: "12030821-2df1-4742-8879-2885ea6315da"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.183793 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12030821-2df1-4742-8879-2885ea6315da-kube-api-access-sdwct" (OuterVolumeSpecName: "kube-api-access-sdwct") pod "12030821-2df1-4742-8879-2885ea6315da" (UID: "12030821-2df1-4742-8879-2885ea6315da"). InnerVolumeSpecName "kube-api-access-sdwct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.183963 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12030821-2df1-4742-8879-2885ea6315da-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "12030821-2df1-4742-8879-2885ea6315da" (UID: "12030821-2df1-4742-8879-2885ea6315da"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.232487 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278386 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-config\") pod \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278483 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-serving-cert\") pod \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278540 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-proxy-ca-bundles\") pod \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278568 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptx2f\" (UniqueName: \"kubernetes.io/projected/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-kube-api-access-ptx2f\") pod \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278585 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-client-ca\") pod \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " 
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278736 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjd8s\" (UniqueName: \"kubernetes.io/projected/fc577754-9b71-4135-b5ca-3b2255b30af0-kube-api-access-tjd8s\") pod \"route-controller-manager-84b9f7f9fd-f8g4w\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278767 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc577754-9b71-4135-b5ca-3b2255b30af0-client-ca\") pod \"route-controller-manager-84b9f7f9fd-f8g4w\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278800 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc577754-9b71-4135-b5ca-3b2255b30af0-config\") pod \"route-controller-manager-84b9f7f9fd-f8g4w\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278817 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc577754-9b71-4135-b5ca-3b2255b30af0-serving-cert\") pod \"route-controller-manager-84b9f7f9fd-f8g4w\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278874 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/12030821-2df1-4742-8879-2885ea6315da-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278885 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12030821-2df1-4742-8879-2885ea6315da-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278895 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdwct\" (UniqueName: \"kubernetes.io/projected/12030821-2df1-4742-8879-2885ea6315da-kube-api-access-sdwct\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278902 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12030821-2df1-4742-8879-2885ea6315da-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.279628 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a" (UID: "b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.279830 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-client-ca" (OuterVolumeSpecName: "client-ca") pod "b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a" (UID: "b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.280269 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-config" (OuterVolumeSpecName: "config") pod "b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a" (UID: "b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.281506 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a" (UID: "b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.282124 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-kube-api-access-ptx2f" (OuterVolumeSpecName: "kube-api-access-ptx2f") pod "b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a" (UID: "b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a"). InnerVolumeSpecName "kube-api-access-ptx2f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.380389 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjd8s\" (UniqueName: \"kubernetes.io/projected/fc577754-9b71-4135-b5ca-3b2255b30af0-kube-api-access-tjd8s\") pod \"route-controller-manager-84b9f7f9fd-f8g4w\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w"
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.380480 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc577754-9b71-4135-b5ca-3b2255b30af0-client-ca\") pod \"route-controller-manager-84b9f7f9fd-f8g4w\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w"
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.380554 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc577754-9b71-4135-b5ca-3b2255b30af0-config\") pod \"route-controller-manager-84b9f7f9fd-f8g4w\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w"
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.380611 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc577754-9b71-4135-b5ca-3b2255b30af0-serving-cert\") pod \"route-controller-manager-84b9f7f9fd-f8g4w\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w"
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.380713 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptx2f\" (UniqueName: \"kubernetes.io/projected/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-kube-api-access-ptx2f\") on node \"crc\" DevicePath \"\""
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.380758 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.380777 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-config\") on node \"crc\" DevicePath \"\""
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.380796 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.380813 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.381956 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc577754-9b71-4135-b5ca-3b2255b30af0-client-ca\") pod \"route-controller-manager-84b9f7f9fd-f8g4w\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w"
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.382130 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc577754-9b71-4135-b5ca-3b2255b30af0-config\") pod \"route-controller-manager-84b9f7f9fd-f8g4w\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w"
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.386890 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc577754-9b71-4135-b5ca-3b2255b30af0-serving-cert\") pod \"route-controller-manager-84b9f7f9fd-f8g4w\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w"
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.388665 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nksn9"
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.388722 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nksn9"
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.399835 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjd8s\" (UniqueName: \"kubernetes.io/projected/fc577754-9b71-4135-b5ca-3b2255b30af0-kube-api-access-tjd8s\") pod \"route-controller-manager-84b9f7f9fd-f8g4w\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w"
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.429092 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w"
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.443708 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nksn9"
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.694347 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w"]
Jan 26 22:43:24 crc kubenswrapper[4793]: W0126 22:43:24.706033 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc577754_9b71_4135_b5ca_3b2255b30af0.slice/crio-653bb91fdc6e280092de13df8db2c0a054813eb903268f7ada80f2ba39af69c1 WatchSource:0}: Error finding container 653bb91fdc6e280092de13df8db2c0a054813eb903268f7ada80f2ba39af69c1: Status 404 returned error can't find the container with id 653bb91fdc6e280092de13df8db2c0a054813eb903268f7ada80f2ba39af69c1
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.928427 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" event={"ID":"fc577754-9b71-4135-b5ca-3b2255b30af0","Type":"ContainerStarted","Data":"b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30"}
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.928485 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" event={"ID":"fc577754-9b71-4135-b5ca-3b2255b30af0","Type":"ContainerStarted","Data":"653bb91fdc6e280092de13df8db2c0a054813eb903268f7ada80f2ba39af69c1"}
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.928718 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w"
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.930585 4793 patch_prober.go:28] interesting pod/route-controller-manager-84b9f7f9fd-f8g4w container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body=
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.930701 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" podUID="fc577754-9b71-4135-b5ca-3b2255b30af0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused"
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.932569 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" event={"ID":"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a","Type":"ContainerDied","Data":"45b69ac25c01759287e2d0a5eea45d0fa774df75dbd92ec0b24b338a87d0949d"}
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.932609 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr"
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.932701 4793 scope.go:117] "RemoveContainer" containerID="c8090fdd1369c91b7dd744e02ec61db84949669bb837332b7d8599799222e44f"
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.935016 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" event={"ID":"12030821-2df1-4742-8879-2885ea6315da","Type":"ContainerDied","Data":"59cadafa8d535574a7b73e67918ff7235119f1b552a808030d700e00a0cd95a9"}
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.935260 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs"
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.967144 4793 scope.go:117] "RemoveContainer" containerID="53dcef64e09d644b0cdaca29294dae740c9b20757947cb7581ca4de5d32dd853"
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.987250 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" podStartSLOduration=2.987225833 podStartE2EDuration="2.987225833s" podCreationTimestamp="2026-01-26 22:43:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:43:24.957870383 +0000 UTC m=+219.946641895" watchObservedRunningTime="2026-01-26 22:43:24.987225833 +0000 UTC m=+219.975997355"
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.992254 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs"]
Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.994867 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs"]
Jan 26 22:43:25 crc kubenswrapper[4793]: I0126 22:43:25.005936 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84c7c9c945-fvjfr"]
Jan 26 22:43:25 crc kubenswrapper[4793]: I0126 22:43:25.010122 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nksn9"
Jan 26 22:43:25 crc kubenswrapper[4793]: I0126 22:43:25.010737 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-84c7c9c945-fvjfr"]
Jan 26 22:43:25 crc kubenswrapper[4793]: I0126 22:43:25.024250 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-78ml2"
Jan 26 22:43:25 crc kubenswrapper[4793]: I0126 22:43:25.131630 4793 patch_prober.go:28] interesting pod/controller-manager-84c7c9c945-fvjfr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: i/o timeout" start-of-body=
Jan 26 22:43:25 crc kubenswrapper[4793]: I0126 22:43:25.131694 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" podUID="b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: i/o timeout"
Jan 26 22:43:25 crc kubenswrapper[4793]: I0126 22:43:25.599513 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lvnpc"]
Jan 26 22:43:25 crc kubenswrapper[4793]: I0126 22:43:25.769014 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12030821-2df1-4742-8879-2885ea6315da" path="/var/lib/kubelet/pods/12030821-2df1-4742-8879-2885ea6315da/volumes"
Jan 26 22:43:25 crc kubenswrapper[4793]: I0126 22:43:25.769678 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a" path="/var/lib/kubelet/pods/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a/volumes"
Jan 26 22:43:25 crc kubenswrapper[4793]: I0126 22:43:25.949424 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w"
Jan 26 22:43:25 crc kubenswrapper[4793]: I0126 22:43:25.968461 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nksn9"]
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.483765 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"]
Jan 26 22:43:26 crc kubenswrapper[4793]: E0126 22:43:26.484594 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a" containerName="controller-manager"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.484617 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a" containerName="controller-manager"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.484794 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a" containerName="controller-manager"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.485789 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.490066 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.490806 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.490891 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.490892 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.490897 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.491516 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.502821 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.504853 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"]
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.537828 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1919941b-c99f-483f-957d-eac3d9c10d76-serving-cert\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.537944 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-config\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.538026 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-proxy-ca-bundles\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.538118 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp795\" (UniqueName: \"kubernetes.io/projected/1919941b-c99f-483f-957d-eac3d9c10d76-kube-api-access-lp795\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.538176 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-client-ca\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.607398 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xlnl6"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.608520 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xlnl6"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.639237 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1919941b-c99f-483f-957d-eac3d9c10d76-serving-cert\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.639324 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-config\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.639384 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-proxy-ca-bundles\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.639445 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp795\" (UniqueName: \"kubernetes.io/projected/1919941b-c99f-483f-957d-eac3d9c10d76-kube-api-access-lp795\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.639484 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-client-ca\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.641373 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-proxy-ca-bundles\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.641524 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-client-ca\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.641583 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-config\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.657967 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1919941b-c99f-483f-957d-eac3d9c10d76-serving-cert\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.661823 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp795\" (UniqueName: \"kubernetes.io/projected/1919941b-c99f-483f-957d-eac3d9c10d76-kube-api-access-lp795\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.683157 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xlnl6"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.846006 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"
Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.954244 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nksn9" podUID="ca21a12c-d4c3-414b-816c-858756e16147" containerName="registry-server" containerID="cri-o://78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57" gracePeriod=2
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.003849 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xlnl6"
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.160402 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"]
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.287864 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s2bff"
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.331454 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s2bff"
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.341246 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nksn9"
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.350273 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca21a12c-d4c3-414b-816c-858756e16147-catalog-content\") pod \"ca21a12c-d4c3-414b-816c-858756e16147\" (UID: \"ca21a12c-d4c3-414b-816c-858756e16147\") "
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.412869 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca21a12c-d4c3-414b-816c-858756e16147-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca21a12c-d4c3-414b-816c-858756e16147" (UID: "ca21a12c-d4c3-414b-816c-858756e16147"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.451673 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8sb6\" (UniqueName: \"kubernetes.io/projected/ca21a12c-d4c3-414b-816c-858756e16147-kube-api-access-v8sb6\") pod \"ca21a12c-d4c3-414b-816c-858756e16147\" (UID: \"ca21a12c-d4c3-414b-816c-858756e16147\") "
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.451734 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca21a12c-d4c3-414b-816c-858756e16147-utilities\") pod \"ca21a12c-d4c3-414b-816c-858756e16147\" (UID: \"ca21a12c-d4c3-414b-816c-858756e16147\") "
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.451999 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca21a12c-d4c3-414b-816c-858756e16147-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.452578 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca21a12c-d4c3-414b-816c-858756e16147-utilities" (OuterVolumeSpecName: "utilities") pod "ca21a12c-d4c3-414b-816c-858756e16147" (UID: "ca21a12c-d4c3-414b-816c-858756e16147"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.457208 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca21a12c-d4c3-414b-816c-858756e16147-kube-api-access-v8sb6" (OuterVolumeSpecName: "kube-api-access-v8sb6") pod "ca21a12c-d4c3-414b-816c-858756e16147" (UID: "ca21a12c-d4c3-414b-816c-858756e16147"). InnerVolumeSpecName "kube-api-access-v8sb6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.553841 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8sb6\" (UniqueName: \"kubernetes.io/projected/ca21a12c-d4c3-414b-816c-858756e16147-kube-api-access-v8sb6\") on node \"crc\" DevicePath \"\""
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.554406 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca21a12c-d4c3-414b-816c-858756e16147-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.961449 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" event={"ID":"1919941b-c99f-483f-957d-eac3d9c10d76","Type":"ContainerStarted","Data":"14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba"}
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.961529 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" event={"ID":"1919941b-c99f-483f-957d-eac3d9c10d76","Type":"ContainerStarted","Data":"0b072c7b5c8d4bf3ec078448030b74f78bd05e8468581b61acdb9d6e7206815a"}
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.961710 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.963649 4793 generic.go:334] "Generic (PLEG): container finished" podID="ca21a12c-d4c3-414b-816c-858756e16147" containerID="78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57" exitCode=0
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.963737 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nksn9"
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.963726 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nksn9" event={"ID":"ca21a12c-d4c3-414b-816c-858756e16147","Type":"ContainerDied","Data":"78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57"}
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.963803 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nksn9" event={"ID":"ca21a12c-d4c3-414b-816c-858756e16147","Type":"ContainerDied","Data":"23aef63c8314502249bed5cdebc0d033c6303855f2021cfbe103e4220fb0c8cd"}
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.963839 4793 scope.go:117] "RemoveContainer" containerID="78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57"
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.971650 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.984382 4793 scope.go:117] "RemoveContainer" containerID="a3c4ae7623e287fd7f6ba21100735e12f703b885b3499d723b76e0f9839e7857"
Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.987018 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" podStartSLOduration=5.986994724 podStartE2EDuration="5.986994724s" podCreationTimestamp="2026-01-26 22:43:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:43:27.985849515 +0000 UTC m=+222.974621027" watchObservedRunningTime="2026-01-26 22:43:27.986994724 +0000 UTC m=+222.975766246"
Jan 26 22:43:28 crc kubenswrapper[4793]: I0126 22:43:28.005982 4793 scope.go:117] "RemoveContainer" containerID="3a6a8c3db06eb192af175b0b831948bebe1a38d7235621521423f00b6264fcc2"
Jan 26 22:43:28 crc kubenswrapper[4793]: I0126 22:43:28.035484 4793 scope.go:117] "RemoveContainer" containerID="78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57"
Jan 26 22:43:28 crc kubenswrapper[4793]: E0126 22:43:28.036036 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57\": container with ID starting with 78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57 not found: ID does not exist" containerID="78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57"
Jan 26 22:43:28 crc kubenswrapper[4793]: I0126 22:43:28.036069 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57"} err="failed to get container status \"78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57\": rpc error: code = NotFound desc = could not find container \"78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57\": container with ID starting with 78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57 not found: ID does not exist"
Jan 26 22:43:28 crc kubenswrapper[4793]: I0126 22:43:28.036093 4793 scope.go:117] "RemoveContainer" containerID="a3c4ae7623e287fd7f6ba21100735e12f703b885b3499d723b76e0f9839e7857"
Jan 26 22:43:28 crc kubenswrapper[4793]: E0126 22:43:28.036592 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3c4ae7623e287fd7f6ba21100735e12f703b885b3499d723b76e0f9839e7857\": container with ID starting with a3c4ae7623e287fd7f6ba21100735e12f703b885b3499d723b76e0f9839e7857 not found: ID does not exist" containerID="a3c4ae7623e287fd7f6ba21100735e12f703b885b3499d723b76e0f9839e7857"
Jan 26 22:43:28 crc kubenswrapper[4793]: I0126 22:43:28.036683 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c4ae7623e287fd7f6ba21100735e12f703b885b3499d723b76e0f9839e7857"} err="failed to get container status \"a3c4ae7623e287fd7f6ba21100735e12f703b885b3499d723b76e0f9839e7857\": rpc error: code = NotFound desc = could not find container \"a3c4ae7623e287fd7f6ba21100735e12f703b885b3499d723b76e0f9839e7857\": container with ID starting with a3c4ae7623e287fd7f6ba21100735e12f703b885b3499d723b76e0f9839e7857 not found: ID does not exist"
Jan 26 22:43:28 crc kubenswrapper[4793]: I0126 22:43:28.036723 4793 scope.go:117] "RemoveContainer" containerID="3a6a8c3db06eb192af175b0b831948bebe1a38d7235621521423f00b6264fcc2"
Jan 26 22:43:28 crc kubenswrapper[4793]: E0126 22:43:28.037334 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a6a8c3db06eb192af175b0b831948bebe1a38d7235621521423f00b6264fcc2\": container with ID starting with 3a6a8c3db06eb192af175b0b831948bebe1a38d7235621521423f00b6264fcc2 not found: ID does not exist" containerID="3a6a8c3db06eb192af175b0b831948bebe1a38d7235621521423f00b6264fcc2"
Jan 26 22:43:28 crc kubenswrapper[4793]: I0126 22:43:28.037385 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a6a8c3db06eb192af175b0b831948bebe1a38d7235621521423f00b6264fcc2"} err="failed to get container status \"3a6a8c3db06eb192af175b0b831948bebe1a38d7235621521423f00b6264fcc2\": rpc error: code = NotFound desc = could not find container \"3a6a8c3db06eb192af175b0b831948bebe1a38d7235621521423f00b6264fcc2\": container with ID starting with 3a6a8c3db06eb192af175b0b831948bebe1a38d7235621521423f00b6264fcc2 not found: ID does not exist"
Jan 26 22:43:28 crc kubenswrapper[4793]: I0126 22:43:28.063277 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nksn9"]
Jan 26 22:43:28 crc kubenswrapper[4793]: I0126 22:43:28.070810 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nksn9"]
Jan 26 22:43:29 crc kubenswrapper[4793]: I0126 22:43:29.767885 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca21a12c-d4c3-414b-816c-858756e16147" path="/var/lib/kubelet/pods/ca21a12c-d4c3-414b-816c-858756e16147/volumes"
Jan 26 22:43:30 crc kubenswrapper[4793]: I0126 22:43:30.367725 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xlnl6"]
Jan 26 22:43:30 crc kubenswrapper[4793]: I0126 22:43:30.368558 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xlnl6" podUID="99c0251d-b287-4c00-a392-c43b8164e73d" containerName="registry-server" containerID="cri-o://6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98" gracePeriod=2
Jan 26 22:43:30 crc kubenswrapper[4793]: I0126 22:43:30.847109 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xlnl6"
Jan 26 22:43:30 crc kubenswrapper[4793]: I0126 22:43:30.987087 4793 generic.go:334] "Generic (PLEG): container finished" podID="99c0251d-b287-4c00-a392-c43b8164e73d" containerID="6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98" exitCode=0
Jan 26 22:43:30 crc kubenswrapper[4793]: I0126 22:43:30.987141 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xlnl6" event={"ID":"99c0251d-b287-4c00-a392-c43b8164e73d","Type":"ContainerDied","Data":"6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98"}
Jan 26 22:43:30 crc kubenswrapper[4793]: I0126 22:43:30.987178 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xlnl6" event={"ID":"99c0251d-b287-4c00-a392-c43b8164e73d","Type":"ContainerDied","Data":"18efb665e88f94cc0c7aef3bce9135f6a8b698b34f0bd84750c1350537a84b4c"}
Jan 26 22:43:30 crc kubenswrapper[4793]: I0126 22:43:30.987216 4793 scope.go:117] "RemoveContainer" containerID="6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98"
Jan 26 22:43:30 crc kubenswrapper[4793]: I0126 22:43:30.987224 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xlnl6"
Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.008636 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99c0251d-b287-4c00-a392-c43b8164e73d-catalog-content\") pod \"99c0251d-b287-4c00-a392-c43b8164e73d\" (UID: \"99c0251d-b287-4c00-a392-c43b8164e73d\") "
Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.008730 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpdhr\" (UniqueName: \"kubernetes.io/projected/99c0251d-b287-4c00-a392-c43b8164e73d-kube-api-access-rpdhr\") pod \"99c0251d-b287-4c00-a392-c43b8164e73d\" (UID: \"99c0251d-b287-4c00-a392-c43b8164e73d\") "
Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.008801 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99c0251d-b287-4c00-a392-c43b8164e73d-utilities\") pod \"99c0251d-b287-4c00-a392-c43b8164e73d\" (UID: \"99c0251d-b287-4c00-a392-c43b8164e73d\") "
Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.010843 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99c0251d-b287-4c00-a392-c43b8164e73d-utilities" (OuterVolumeSpecName: "utilities") pod "99c0251d-b287-4c00-a392-c43b8164e73d" (UID: "99c0251d-b287-4c00-a392-c43b8164e73d"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.015285 4793 scope.go:117] "RemoveContainer" containerID="dfa82ba8e30e94ad7515e018e23f0c12b0c126cd97491d1293d7df255ae07c88" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.018412 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99c0251d-b287-4c00-a392-c43b8164e73d-kube-api-access-rpdhr" (OuterVolumeSpecName: "kube-api-access-rpdhr") pod "99c0251d-b287-4c00-a392-c43b8164e73d" (UID: "99c0251d-b287-4c00-a392-c43b8164e73d"). InnerVolumeSpecName "kube-api-access-rpdhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.042280 4793 scope.go:117] "RemoveContainer" containerID="202853395a904340f93137f4b0c85c5302f20e05e85d4ff51c5667af14431437" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.051752 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99c0251d-b287-4c00-a392-c43b8164e73d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99c0251d-b287-4c00-a392-c43b8164e73d" (UID: "99c0251d-b287-4c00-a392-c43b8164e73d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.062692 4793 scope.go:117] "RemoveContainer" containerID="6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98" Jan 26 22:43:31 crc kubenswrapper[4793]: E0126 22:43:31.063475 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98\": container with ID starting with 6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98 not found: ID does not exist" containerID="6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.063536 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98"} err="failed to get container status \"6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98\": rpc error: code = NotFound desc = could not find container \"6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98\": container with ID starting with 6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98 not found: ID does not exist" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.063571 4793 scope.go:117] "RemoveContainer" containerID="dfa82ba8e30e94ad7515e018e23f0c12b0c126cd97491d1293d7df255ae07c88" Jan 26 22:43:31 crc kubenswrapper[4793]: E0126 22:43:31.064118 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfa82ba8e30e94ad7515e018e23f0c12b0c126cd97491d1293d7df255ae07c88\": container with ID starting with dfa82ba8e30e94ad7515e018e23f0c12b0c126cd97491d1293d7df255ae07c88 not found: ID does not exist" containerID="dfa82ba8e30e94ad7515e018e23f0c12b0c126cd97491d1293d7df255ae07c88" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.064269 
4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfa82ba8e30e94ad7515e018e23f0c12b0c126cd97491d1293d7df255ae07c88"} err="failed to get container status \"dfa82ba8e30e94ad7515e018e23f0c12b0c126cd97491d1293d7df255ae07c88\": rpc error: code = NotFound desc = could not find container \"dfa82ba8e30e94ad7515e018e23f0c12b0c126cd97491d1293d7df255ae07c88\": container with ID starting with dfa82ba8e30e94ad7515e018e23f0c12b0c126cd97491d1293d7df255ae07c88 not found: ID does not exist" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.064387 4793 scope.go:117] "RemoveContainer" containerID="202853395a904340f93137f4b0c85c5302f20e05e85d4ff51c5667af14431437" Jan 26 22:43:31 crc kubenswrapper[4793]: E0126 22:43:31.064822 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"202853395a904340f93137f4b0c85c5302f20e05e85d4ff51c5667af14431437\": container with ID starting with 202853395a904340f93137f4b0c85c5302f20e05e85d4ff51c5667af14431437 not found: ID does not exist" containerID="202853395a904340f93137f4b0c85c5302f20e05e85d4ff51c5667af14431437" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.064940 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"202853395a904340f93137f4b0c85c5302f20e05e85d4ff51c5667af14431437"} err="failed to get container status \"202853395a904340f93137f4b0c85c5302f20e05e85d4ff51c5667af14431437\": rpc error: code = NotFound desc = could not find container \"202853395a904340f93137f4b0c85c5302f20e05e85d4ff51c5667af14431437\": container with ID starting with 202853395a904340f93137f4b0c85c5302f20e05e85d4ff51c5667af14431437 not found: ID does not exist" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.110750 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/99c0251d-b287-4c00-a392-c43b8164e73d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.110870 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpdhr\" (UniqueName: \"kubernetes.io/projected/99c0251d-b287-4c00-a392-c43b8164e73d-kube-api-access-rpdhr\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.110992 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99c0251d-b287-4c00-a392-c43b8164e73d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.334842 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xlnl6"] Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.340584 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xlnl6"] Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.769992 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99c0251d-b287-4c00-a392-c43b8164e73d" path="/var/lib/kubelet/pods/99c0251d-b287-4c00-a392-c43b8164e73d/volumes" Jan 26 22:43:42 crc kubenswrapper[4793]: I0126 22:43:42.346357 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"] Jan 26 22:43:42 crc kubenswrapper[4793]: I0126 22:43:42.347433 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" podUID="1919941b-c99f-483f-957d-eac3d9c10d76" containerName="controller-manager" containerID="cri-o://14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba" gracePeriod=30 Jan 26 22:43:42 crc kubenswrapper[4793]: I0126 22:43:42.462559 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w"] Jan 26 22:43:42 crc kubenswrapper[4793]: I0126 22:43:42.463157 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" podUID="fc577754-9b71-4135-b5ca-3b2255b30af0" containerName="route-controller-manager" containerID="cri-o://b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30" gracePeriod=30 Jan 26 22:43:42 crc kubenswrapper[4793]: I0126 22:43:42.928317 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 22:43:42 crc kubenswrapper[4793]: I0126 22:43:42.931998 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.082322 4793 generic.go:334] "Generic (PLEG): container finished" podID="fc577754-9b71-4135-b5ca-3b2255b30af0" containerID="b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30" exitCode=0 Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.082387 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" event={"ID":"fc577754-9b71-4135-b5ca-3b2255b30af0","Type":"ContainerDied","Data":"b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30"} Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.082416 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.082442 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" event={"ID":"fc577754-9b71-4135-b5ca-3b2255b30af0","Type":"ContainerDied","Data":"653bb91fdc6e280092de13df8db2c0a054813eb903268f7ada80f2ba39af69c1"} Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.082467 4793 scope.go:117] "RemoveContainer" containerID="b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.082948 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc577754-9b71-4135-b5ca-3b2255b30af0-client-ca\") pod \"fc577754-9b71-4135-b5ca-3b2255b30af0\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.082997 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjd8s\" (UniqueName: \"kubernetes.io/projected/fc577754-9b71-4135-b5ca-3b2255b30af0-kube-api-access-tjd8s\") pod \"fc577754-9b71-4135-b5ca-3b2255b30af0\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.083045 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-proxy-ca-bundles\") pod \"1919941b-c99f-483f-957d-eac3d9c10d76\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.083074 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp795\" (UniqueName: \"kubernetes.io/projected/1919941b-c99f-483f-957d-eac3d9c10d76-kube-api-access-lp795\") pod 
\"1919941b-c99f-483f-957d-eac3d9c10d76\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.083109 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1919941b-c99f-483f-957d-eac3d9c10d76-serving-cert\") pod \"1919941b-c99f-483f-957d-eac3d9c10d76\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.084445 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-client-ca\") pod \"1919941b-c99f-483f-957d-eac3d9c10d76\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.084152 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc577754-9b71-4135-b5ca-3b2255b30af0-client-ca" (OuterVolumeSpecName: "client-ca") pod "fc577754-9b71-4135-b5ca-3b2255b30af0" (UID: "fc577754-9b71-4135-b5ca-3b2255b30af0"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.084489 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc577754-9b71-4135-b5ca-3b2255b30af0-config\") pod \"fc577754-9b71-4135-b5ca-3b2255b30af0\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.084521 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc577754-9b71-4135-b5ca-3b2255b30af0-serving-cert\") pod \"fc577754-9b71-4135-b5ca-3b2255b30af0\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.084548 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-config\") pod \"1919941b-c99f-483f-957d-eac3d9c10d76\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.084859 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc577754-9b71-4135-b5ca-3b2255b30af0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.084471 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1919941b-c99f-483f-957d-eac3d9c10d76" (UID: "1919941b-c99f-483f-957d-eac3d9c10d76"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.084946 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-client-ca" (OuterVolumeSpecName: "client-ca") pod "1919941b-c99f-483f-957d-eac3d9c10d76" (UID: "1919941b-c99f-483f-957d-eac3d9c10d76"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.085002 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc577754-9b71-4135-b5ca-3b2255b30af0-config" (OuterVolumeSpecName: "config") pod "fc577754-9b71-4135-b5ca-3b2255b30af0" (UID: "fc577754-9b71-4135-b5ca-3b2255b30af0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.085464 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-config" (OuterVolumeSpecName: "config") pod "1919941b-c99f-483f-957d-eac3d9c10d76" (UID: "1919941b-c99f-483f-957d-eac3d9c10d76"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.087513 4793 generic.go:334] "Generic (PLEG): container finished" podID="1919941b-c99f-483f-957d-eac3d9c10d76" containerID="14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba" exitCode=0 Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.087557 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" event={"ID":"1919941b-c99f-483f-957d-eac3d9c10d76","Type":"ContainerDied","Data":"14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba"} Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.087586 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" event={"ID":"1919941b-c99f-483f-957d-eac3d9c10d76","Type":"ContainerDied","Data":"0b072c7b5c8d4bf3ec078448030b74f78bd05e8468581b61acdb9d6e7206815a"} Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.087614 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.089723 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc577754-9b71-4135-b5ca-3b2255b30af0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fc577754-9b71-4135-b5ca-3b2255b30af0" (UID: "fc577754-9b71-4135-b5ca-3b2255b30af0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.089790 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc577754-9b71-4135-b5ca-3b2255b30af0-kube-api-access-tjd8s" (OuterVolumeSpecName: "kube-api-access-tjd8s") pod "fc577754-9b71-4135-b5ca-3b2255b30af0" (UID: "fc577754-9b71-4135-b5ca-3b2255b30af0"). 
InnerVolumeSpecName "kube-api-access-tjd8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.089849 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1919941b-c99f-483f-957d-eac3d9c10d76-kube-api-access-lp795" (OuterVolumeSpecName: "kube-api-access-lp795") pod "1919941b-c99f-483f-957d-eac3d9c10d76" (UID: "1919941b-c99f-483f-957d-eac3d9c10d76"). InnerVolumeSpecName "kube-api-access-lp795". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.092130 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1919941b-c99f-483f-957d-eac3d9c10d76-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1919941b-c99f-483f-957d-eac3d9c10d76" (UID: "1919941b-c99f-483f-957d-eac3d9c10d76"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.114936 4793 scope.go:117] "RemoveContainer" containerID="b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30" Jan 26 22:43:43 crc kubenswrapper[4793]: E0126 22:43:43.115770 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30\": container with ID starting with b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30 not found: ID does not exist" containerID="b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.116058 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30"} err="failed to get container status \"b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30\": rpc error: code = 
NotFound desc = could not find container \"b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30\": container with ID starting with b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30 not found: ID does not exist" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.116106 4793 scope.go:117] "RemoveContainer" containerID="14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.130539 4793 scope.go:117] "RemoveContainer" containerID="14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba" Jan 26 22:43:43 crc kubenswrapper[4793]: E0126 22:43:43.131165 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba\": container with ID starting with 14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba not found: ID does not exist" containerID="14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.131234 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba"} err="failed to get container status \"14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba\": rpc error: code = NotFound desc = could not find container \"14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba\": container with ID starting with 14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba not found: ID does not exist" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.185342 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc577754-9b71-4135-b5ca-3b2255b30af0-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.185382 4793 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc577754-9b71-4135-b5ca-3b2255b30af0-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.185395 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.185407 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjd8s\" (UniqueName: \"kubernetes.io/projected/fc577754-9b71-4135-b5ca-3b2255b30af0-kube-api-access-tjd8s\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.185422 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.185434 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp795\" (UniqueName: \"kubernetes.io/projected/1919941b-c99f-483f-957d-eac3d9c10d76-kube-api-access-lp795\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.185445 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1919941b-c99f-483f-957d-eac3d9c10d76-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.185456 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.412526 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w"] Jan 26 22:43:43 crc 
kubenswrapper[4793]: I0126 22:43:43.416077 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w"] Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.426490 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"] Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.430449 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"] Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.487997 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn"] Jan 26 22:43:43 crc kubenswrapper[4793]: E0126 22:43:43.488352 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca21a12c-d4c3-414b-816c-858756e16147" containerName="extract-content" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.488375 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca21a12c-d4c3-414b-816c-858756e16147" containerName="extract-content" Jan 26 22:43:43 crc kubenswrapper[4793]: E0126 22:43:43.488397 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca21a12c-d4c3-414b-816c-858756e16147" containerName="registry-server" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.488406 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca21a12c-d4c3-414b-816c-858756e16147" containerName="registry-server" Jan 26 22:43:43 crc kubenswrapper[4793]: E0126 22:43:43.488418 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99c0251d-b287-4c00-a392-c43b8164e73d" containerName="extract-content" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.488426 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="99c0251d-b287-4c00-a392-c43b8164e73d" containerName="extract-content" Jan 26 22:43:43 crc kubenswrapper[4793]: E0126 
22:43:43.488439 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99c0251d-b287-4c00-a392-c43b8164e73d" containerName="registry-server" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.488446 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="99c0251d-b287-4c00-a392-c43b8164e73d" containerName="registry-server" Jan 26 22:43:43 crc kubenswrapper[4793]: E0126 22:43:43.488457 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1919941b-c99f-483f-957d-eac3d9c10d76" containerName="controller-manager" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.488467 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1919941b-c99f-483f-957d-eac3d9c10d76" containerName="controller-manager" Jan 26 22:43:43 crc kubenswrapper[4793]: E0126 22:43:43.488482 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc577754-9b71-4135-b5ca-3b2255b30af0" containerName="route-controller-manager" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.488491 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc577754-9b71-4135-b5ca-3b2255b30af0" containerName="route-controller-manager" Jan 26 22:43:43 crc kubenswrapper[4793]: E0126 22:43:43.488502 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99c0251d-b287-4c00-a392-c43b8164e73d" containerName="extract-utilities" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.488510 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="99c0251d-b287-4c00-a392-c43b8164e73d" containerName="extract-utilities" Jan 26 22:43:43 crc kubenswrapper[4793]: E0126 22:43:43.488528 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca21a12c-d4c3-414b-816c-858756e16147" containerName="extract-utilities" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.488535 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca21a12c-d4c3-414b-816c-858756e16147" containerName="extract-utilities" Jan 26 22:43:43 crc 
kubenswrapper[4793]: I0126 22:43:43.488650 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1919941b-c99f-483f-957d-eac3d9c10d76" containerName="controller-manager" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.488667 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc577754-9b71-4135-b5ca-3b2255b30af0" containerName="route-controller-manager" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.488676 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca21a12c-d4c3-414b-816c-858756e16147" containerName="registry-server" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.488688 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="99c0251d-b287-4c00-a392-c43b8164e73d" containerName="registry-server" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.489222 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.497559 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.497930 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.498205 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.498427 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.498791 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.498793 4793 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.502843 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn"] Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.504623 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.691856 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-serving-cert\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.691968 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-proxy-ca-bundles\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.692004 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-config\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.692037 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-client-ca\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.692170 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfsxh\" (UniqueName: \"kubernetes.io/projected/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-kube-api-access-jfsxh\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.767457 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1919941b-c99f-483f-957d-eac3d9c10d76" path="/var/lib/kubelet/pods/1919941b-c99f-483f-957d-eac3d9c10d76/volumes" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.768022 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc577754-9b71-4135-b5ca-3b2255b30af0" path="/var/lib/kubelet/pods/fc577754-9b71-4135-b5ca-3b2255b30af0/volumes" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.793352 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-serving-cert\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.793805 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-proxy-ca-bundles\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " 
pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.793867 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-config\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.793906 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-client-ca\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.793958 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfsxh\" (UniqueName: \"kubernetes.io/projected/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-kube-api-access-jfsxh\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.795309 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-client-ca\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.795527 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-proxy-ca-bundles\") pod 
\"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.796426 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-config\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.797914 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-serving-cert\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.810866 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfsxh\" (UniqueName: \"kubernetes.io/projected/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-kube-api-access-jfsxh\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.109404 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.489799 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m"] Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.491121 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.494032 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.494391 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.494919 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.495154 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.495370 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.504576 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m"] Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.506528 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.564142 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn"] Jan 26 22:43:44 crc kubenswrapper[4793]: W0126 22:43:44.571035 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe1ed27d_d22f_4a4c_8612_c6ac5dc8a978.slice/crio-220b50f40f02513766d2b5089cf0c9c6c74666c7caf1f42eb6fa61dfa0005a5c WatchSource:0}: Error finding container 
220b50f40f02513766d2b5089cf0c9c6c74666c7caf1f42eb6fa61dfa0005a5c: Status 404 returned error can't find the container with id 220b50f40f02513766d2b5089cf0c9c6c74666c7caf1f42eb6fa61dfa0005a5c Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.653567 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d8b4977-7970-4687-a34b-d0c385d5672e-serving-cert\") pod \"route-controller-manager-787f475857-mdq9m\" (UID: \"7d8b4977-7970-4687-a34b-d0c385d5672e\") " pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.653655 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d8b4977-7970-4687-a34b-d0c385d5672e-config\") pod \"route-controller-manager-787f475857-mdq9m\" (UID: \"7d8b4977-7970-4687-a34b-d0c385d5672e\") " pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.653692 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghmdc\" (UniqueName: \"kubernetes.io/projected/7d8b4977-7970-4687-a34b-d0c385d5672e-kube-api-access-ghmdc\") pod \"route-controller-manager-787f475857-mdq9m\" (UID: \"7d8b4977-7970-4687-a34b-d0c385d5672e\") " pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.653722 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d8b4977-7970-4687-a34b-d0c385d5672e-client-ca\") pod \"route-controller-manager-787f475857-mdq9m\" (UID: \"7d8b4977-7970-4687-a34b-d0c385d5672e\") " 
pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.754789 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d8b4977-7970-4687-a34b-d0c385d5672e-config\") pod \"route-controller-manager-787f475857-mdq9m\" (UID: \"7d8b4977-7970-4687-a34b-d0c385d5672e\") " pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.755583 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghmdc\" (UniqueName: \"kubernetes.io/projected/7d8b4977-7970-4687-a34b-d0c385d5672e-kube-api-access-ghmdc\") pod \"route-controller-manager-787f475857-mdq9m\" (UID: \"7d8b4977-7970-4687-a34b-d0c385d5672e\") " pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.755709 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d8b4977-7970-4687-a34b-d0c385d5672e-client-ca\") pod \"route-controller-manager-787f475857-mdq9m\" (UID: \"7d8b4977-7970-4687-a34b-d0c385d5672e\") " pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.756572 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d8b4977-7970-4687-a34b-d0c385d5672e-config\") pod \"route-controller-manager-787f475857-mdq9m\" (UID: \"7d8b4977-7970-4687-a34b-d0c385d5672e\") " pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.756747 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7d8b4977-7970-4687-a34b-d0c385d5672e-client-ca\") pod \"route-controller-manager-787f475857-mdq9m\" (UID: \"7d8b4977-7970-4687-a34b-d0c385d5672e\") " pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.757013 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d8b4977-7970-4687-a34b-d0c385d5672e-serving-cert\") pod \"route-controller-manager-787f475857-mdq9m\" (UID: \"7d8b4977-7970-4687-a34b-d0c385d5672e\") " pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.764509 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d8b4977-7970-4687-a34b-d0c385d5672e-serving-cert\") pod \"route-controller-manager-787f475857-mdq9m\" (UID: \"7d8b4977-7970-4687-a34b-d0c385d5672e\") " pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.775768 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghmdc\" (UniqueName: \"kubernetes.io/projected/7d8b4977-7970-4687-a34b-d0c385d5672e-kube-api-access-ghmdc\") pod \"route-controller-manager-787f475857-mdq9m\" (UID: \"7d8b4977-7970-4687-a34b-d0c385d5672e\") " pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.817539 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:45 crc kubenswrapper[4793]: I0126 22:43:45.112337 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" event={"ID":"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978","Type":"ContainerStarted","Data":"b4ddfc357b8d576565172d4c070250813cde188dc3e0497fc2a13cea2d107687"} Jan 26 22:43:45 crc kubenswrapper[4793]: I0126 22:43:45.112379 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" event={"ID":"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978","Type":"ContainerStarted","Data":"220b50f40f02513766d2b5089cf0c9c6c74666c7caf1f42eb6fa61dfa0005a5c"} Jan 26 22:43:45 crc kubenswrapper[4793]: I0126 22:43:45.113535 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:45 crc kubenswrapper[4793]: I0126 22:43:45.118750 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:45 crc kubenswrapper[4793]: I0126 22:43:45.162179 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" podStartSLOduration=3.162155868 podStartE2EDuration="3.162155868s" podCreationTimestamp="2026-01-26 22:43:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:43:45.142395075 +0000 UTC m=+240.131166587" watchObservedRunningTime="2026-01-26 22:43:45.162155868 +0000 UTC m=+240.150927380" Jan 26 22:43:45 crc kubenswrapper[4793]: I0126 22:43:45.246909 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m"] 
Jan 26 22:43:45 crc kubenswrapper[4793]: W0126 22:43:45.253948 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d8b4977_7970_4687_a34b_d0c385d5672e.slice/crio-f8a9ca87d4052d41cd397e7bd38bd47e635945c3206204f0342e39e4b8cc72c4 WatchSource:0}: Error finding container f8a9ca87d4052d41cd397e7bd38bd47e635945c3206204f0342e39e4b8cc72c4: Status 404 returned error can't find the container with id f8a9ca87d4052d41cd397e7bd38bd47e635945c3206204f0342e39e4b8cc72c4 Jan 26 22:43:46 crc kubenswrapper[4793]: I0126 22:43:46.119710 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" event={"ID":"7d8b4977-7970-4687-a34b-d0c385d5672e","Type":"ContainerStarted","Data":"d5d5c3baf2922669eca8436048c05ad30016076dcea10b963d44d4503d39cfd5"} Jan 26 22:43:46 crc kubenswrapper[4793]: I0126 22:43:46.120212 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" event={"ID":"7d8b4977-7970-4687-a34b-d0c385d5672e","Type":"ContainerStarted","Data":"f8a9ca87d4052d41cd397e7bd38bd47e635945c3206204f0342e39e4b8cc72c4"} Jan 26 22:43:46 crc kubenswrapper[4793]: I0126 22:43:46.120254 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:46 crc kubenswrapper[4793]: I0126 22:43:46.126564 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:46 crc kubenswrapper[4793]: I0126 22:43:46.160583 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" podStartSLOduration=4.160564981 podStartE2EDuration="4.160564981s" podCreationTimestamp="2026-01-26 
22:43:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:43:46.1400046 +0000 UTC m=+241.128776112" watchObservedRunningTime="2026-01-26 22:43:46.160564981 +0000 UTC m=+241.149336493" Jan 26 22:43:50 crc kubenswrapper[4793]: I0126 22:43:50.630219 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" podUID="e17125aa-eb99-4bad-a99d-44b86be4f09d" containerName="oauth-openshift" containerID="cri-o://fda2f5b52038cfc33503a309170e245bf286f4bab1007f8d89f6a02e8239cdfe" gracePeriod=15 Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.159964 4793 generic.go:334] "Generic (PLEG): container finished" podID="e17125aa-eb99-4bad-a99d-44b86be4f09d" containerID="fda2f5b52038cfc33503a309170e245bf286f4bab1007f8d89f6a02e8239cdfe" exitCode=0 Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.160065 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" event={"ID":"e17125aa-eb99-4bad-a99d-44b86be4f09d","Type":"ContainerDied","Data":"fda2f5b52038cfc33503a309170e245bf286f4bab1007f8d89f6a02e8239cdfe"} Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.347107 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.378518 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-router-certs\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.378597 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-login\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.378632 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-idp-0-file-data\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.378676 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-cliconfig\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.378703 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-trusted-ca-bundle\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: 
\"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.378743 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e17125aa-eb99-4bad-a99d-44b86be4f09d-audit-dir\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.378832 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e17125aa-eb99-4bad-a99d-44b86be4f09d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.378769 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-audit-policies\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.378980 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-provider-selection\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.379901 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.379927 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.380038 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.379014 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjx6p\" (UniqueName: \"kubernetes.io/projected/e17125aa-eb99-4bad-a99d-44b86be4f09d-kube-api-access-kjx6p\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.380269 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-error\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.380339 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-session\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.380373 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-ocp-branding-template\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.380396 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-service-ca\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.380448 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-serving-cert\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.380795 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.380824 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc 
kubenswrapper[4793]: I0126 22:43:52.380841 4793 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e17125aa-eb99-4bad-a99d-44b86be4f09d-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.380853 4793 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.381081 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.386555 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.387305 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.390488 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e17125aa-eb99-4bad-a99d-44b86be4f09d-kube-api-access-kjx6p" (OuterVolumeSpecName: "kube-api-access-kjx6p") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "kube-api-access-kjx6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.397379 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.397989 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.398268 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.398343 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.398583 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.400539 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.481969 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.482005 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.482014 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.482023 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.482035 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.482046 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjx6p\" (UniqueName: \"kubernetes.io/projected/e17125aa-eb99-4bad-a99d-44b86be4f09d-kube-api-access-kjx6p\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.482057 4793 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.482066 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.482076 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.482084 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:53 crc kubenswrapper[4793]: I0126 22:43:53.168777 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" event={"ID":"e17125aa-eb99-4bad-a99d-44b86be4f09d","Type":"ContainerDied","Data":"52b42dc9fc3fc776aa8343ac648fa9a1880a7934eae4c7a756dfbfced2a3bb5c"} Jan 26 22:43:53 crc kubenswrapper[4793]: I0126 22:43:53.168851 4793 scope.go:117] "RemoveContainer" containerID="fda2f5b52038cfc33503a309170e245bf286f4bab1007f8d89f6a02e8239cdfe" Jan 26 22:43:53 crc kubenswrapper[4793]: I0126 22:43:53.168868 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:43:53 crc kubenswrapper[4793]: I0126 22:43:53.213708 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lvnpc"] Jan 26 22:43:53 crc kubenswrapper[4793]: I0126 22:43:53.219977 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lvnpc"] Jan 26 22:43:53 crc kubenswrapper[4793]: I0126 22:43:53.772736 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e17125aa-eb99-4bad-a99d-44b86be4f09d" path="/var/lib/kubelet/pods/e17125aa-eb99-4bad-a99d-44b86be4f09d/volumes" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.010165 4793 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 22:43:56 crc kubenswrapper[4793]: E0126 22:43:56.010961 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e17125aa-eb99-4bad-a99d-44b86be4f09d" containerName="oauth-openshift" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.010987 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e17125aa-eb99-4bad-a99d-44b86be4f09d" containerName="oauth-openshift" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.011183 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e17125aa-eb99-4bad-a99d-44b86be4f09d" containerName="oauth-openshift" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.011637 4793 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.011772 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.012081 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b" gracePeriod=15 Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.012296 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d" gracePeriod=15 Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.012403 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088" gracePeriod=15 Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.012478 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2" gracePeriod=15 Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.012017 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955" gracePeriod=15 Jan 26 22:43:56 crc 
kubenswrapper[4793]: I0126 22:43:56.012775 4793 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 22:43:56 crc kubenswrapper[4793]: E0126 22:43:56.013118 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013152 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 22:43:56 crc kubenswrapper[4793]: E0126 22:43:56.013170 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013185 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 22:43:56 crc kubenswrapper[4793]: E0126 22:43:56.013232 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013246 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 22:43:56 crc kubenswrapper[4793]: E0126 22:43:56.013265 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013277 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 22:43:56 crc kubenswrapper[4793]: E0126 22:43:56.013340 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 22:43:56 crc 
kubenswrapper[4793]: I0126 22:43:56.013354 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 22:43:56 crc kubenswrapper[4793]: E0126 22:43:56.013378 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013390 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 22:43:56 crc kubenswrapper[4793]: E0126 22:43:56.013420 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013432 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013605 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013627 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013641 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013658 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013677 4793 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013694 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.019217 4793 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.027465 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.027578 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.027796 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.027900 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.027930 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.062775 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.132985 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.133151 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.133248 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" 
(UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.133272 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.133330 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.133381 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.133418 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.133586 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" 
(UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.133616 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.133654 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.133656 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.133674 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.133707 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.190583 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.192086 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.192877 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2" exitCode=2 Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.236100 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.236169 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.236244 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.236262 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.236304 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.236352 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.353399 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: E0126 22:43:56.375724 4793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.46:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e694222bd365f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 22:43:56.374767199 +0000 UTC m=+251.363538711,LastTimestamp:2026-01-26 22:43:56.374767199 +0000 UTC m=+251.363538711,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 22:43:56 crc kubenswrapper[4793]: E0126 22:43:56.769302 4793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.46:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e694222bd365f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 22:43:56.374767199 +0000 UTC m=+251.363538711,LastTimestamp:2026-01-26 22:43:56.374767199 +0000 UTC m=+251.363538711,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.200806 4793 generic.go:334] "Generic (PLEG): container finished" podID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" containerID="ae86aae5bad8ac03faa82ea1252a93b8958791a84a7892d11dbd7cc6c74b423c" exitCode=0 Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.200961 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dbe90ef5-9518-4ec1-b9b6-479bf9732ded","Type":"ContainerDied","Data":"ae86aae5bad8ac03faa82ea1252a93b8958791a84a7892d11dbd7cc6c74b423c"} Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.202516 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.203109 4793 status_manager.go:851] "Failed to get status for pod" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.203466 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c03cd50aac0b902e0ae894f30cc6ed356e9cc34124740561b34340f71813b806"} Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.203507 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"10aca80144b51045f9aedf38a88ada7facd81ee3b5ec9db74afc7ce3435d0128"} Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.204365 4793 status_manager.go:851] "Failed to get status for pod" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.204867 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.205916 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.207382 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.207904 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d" exitCode=0 Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.207923 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b" exitCode=0 Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.207930 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088" exitCode=0 Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.207959 4793 scope.go:117] "RemoveContainer" containerID="2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7" Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.226050 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.639377 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.641120 4793 status_manager.go:851] "Failed to get status for pod" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.641628 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.670380 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-var-lock\") pod \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\" (UID: \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\") " Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.670436 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-kubelet-dir\") pod \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\" (UID: \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\") " Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.670524 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-var-lock" (OuterVolumeSpecName: "var-lock") pod "dbe90ef5-9518-4ec1-b9b6-479bf9732ded" (UID: "dbe90ef5-9518-4ec1-b9b6-479bf9732ded"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.670683 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-kube-api-access\") pod \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\" (UID: \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\") " Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.670710 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dbe90ef5-9518-4ec1-b9b6-479bf9732ded" (UID: "dbe90ef5-9518-4ec1-b9b6-479bf9732ded"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.670901 4793 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.670921 4793 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.676393 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dbe90ef5-9518-4ec1-b9b6-479bf9732ded" (UID: "dbe90ef5-9518-4ec1-b9b6-479bf9732ded"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.772344 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.148362 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.149578 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.150585 4793 status_manager.go:851] "Failed to get status for pod" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.151183 4793 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.151840 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 
22:43:59.175726 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.175818 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.175855 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.175913 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.175940 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.176063 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.176500 4793 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.176519 4793 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.176529 4793 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.236923 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.238327 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955" exitCode=0 Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.238437 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.239475 4793 scope.go:117] "RemoveContainer" containerID="9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.241619 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dbe90ef5-9518-4ec1-b9b6-479bf9732ded","Type":"ContainerDied","Data":"86d64a9898e0bc215d83fa5f31fbdb9e7c7bf15338d21b31a5a1c977a3cf778e"} Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.241650 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86d64a9898e0bc215d83fa5f31fbdb9e7c7bf15338d21b31a5a1c977a3cf778e" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.241942 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.255194 4793 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.255520 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.255748 4793 status_manager.go:851] "Failed to get status for pod" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" pod="openshift-kube-apiserver/installer-9-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.263108 4793 scope.go:117] "RemoveContainer" containerID="a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.270009 4793 status_manager.go:851] "Failed to get status for pod" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.270494 4793 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.270794 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.280309 4793 scope.go:117] "RemoveContainer" containerID="892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.298724 4793 scope.go:117] "RemoveContainer" containerID="9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.316304 4793 scope.go:117] "RemoveContainer" 
containerID="516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.341616 4793 scope.go:117] "RemoveContainer" containerID="afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.370278 4793 scope.go:117] "RemoveContainer" containerID="9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d" Jan 26 22:43:59 crc kubenswrapper[4793]: E0126 22:43:59.370987 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\": container with ID starting with 9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d not found: ID does not exist" containerID="9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.371076 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d"} err="failed to get container status \"9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\": rpc error: code = NotFound desc = could not find container \"9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\": container with ID starting with 9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d not found: ID does not exist" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.371147 4793 scope.go:117] "RemoveContainer" containerID="a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b" Jan 26 22:43:59 crc kubenswrapper[4793]: E0126 22:43:59.371572 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\": container with ID starting with 
a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b not found: ID does not exist" containerID="a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.371607 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b"} err="failed to get container status \"a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\": rpc error: code = NotFound desc = could not find container \"a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\": container with ID starting with a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b not found: ID does not exist" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.371658 4793 scope.go:117] "RemoveContainer" containerID="892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088" Jan 26 22:43:59 crc kubenswrapper[4793]: E0126 22:43:59.372208 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\": container with ID starting with 892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088 not found: ID does not exist" containerID="892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.372281 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088"} err="failed to get container status \"892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\": rpc error: code = NotFound desc = could not find container \"892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\": container with ID starting with 892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088 not found: ID does not 
exist" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.372310 4793 scope.go:117] "RemoveContainer" containerID="9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2" Jan 26 22:43:59 crc kubenswrapper[4793]: E0126 22:43:59.372632 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\": container with ID starting with 9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2 not found: ID does not exist" containerID="9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.372664 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2"} err="failed to get container status \"9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\": rpc error: code = NotFound desc = could not find container \"9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\": container with ID starting with 9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2 not found: ID does not exist" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.372683 4793 scope.go:117] "RemoveContainer" containerID="516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955" Jan 26 22:43:59 crc kubenswrapper[4793]: E0126 22:43:59.372926 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\": container with ID starting with 516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955 not found: ID does not exist" containerID="516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.373456 4793 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955"} err="failed to get container status \"516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\": rpc error: code = NotFound desc = could not find container \"516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\": container with ID starting with 516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955 not found: ID does not exist" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.373482 4793 scope.go:117] "RemoveContainer" containerID="afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb" Jan 26 22:43:59 crc kubenswrapper[4793]: E0126 22:43:59.373777 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\": container with ID starting with afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb not found: ID does not exist" containerID="afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.373844 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb"} err="failed to get container status \"afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\": rpc error: code = NotFound desc = could not find container \"afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\": container with ID starting with afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb not found: ID does not exist" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.771971 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 26 22:44:01 crc 
kubenswrapper[4793]: E0126 22:44:01.803662 4793 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.46:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" volumeName="registry-storage" Jan 26 22:44:05 crc kubenswrapper[4793]: E0126 22:44:05.634921 4793 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:44:05 crc kubenswrapper[4793]: E0126 22:44:05.635729 4793 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:44:05 crc kubenswrapper[4793]: E0126 22:44:05.636016 4793 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:44:05 crc kubenswrapper[4793]: E0126 22:44:05.636290 4793 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:44:05 crc kubenswrapper[4793]: E0126 22:44:05.636533 4793 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 
22:44:05 crc kubenswrapper[4793]: I0126 22:44:05.636567 4793 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 26 22:44:05 crc kubenswrapper[4793]: E0126 22:44:05.636823 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="200ms" Jan 26 22:44:05 crc kubenswrapper[4793]: I0126 22:44:05.764696 4793 status_manager.go:851] "Failed to get status for pod" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:44:05 crc kubenswrapper[4793]: I0126 22:44:05.764971 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:44:05 crc kubenswrapper[4793]: E0126 22:44:05.838432 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="400ms" Jan 26 22:44:06 crc kubenswrapper[4793]: E0126 22:44:06.240002 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="800ms" Jan 26 22:44:06 crc 
kubenswrapper[4793]: E0126 22:44:06.771077 4793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.46:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e694222bd365f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 22:43:56.374767199 +0000 UTC m=+251.363538711,LastTimestamp:2026-01-26 22:43:56.374767199 +0000 UTC m=+251.363538711,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 22:44:07 crc kubenswrapper[4793]: E0126 22:44:07.041021 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="1.6s" Jan 26 22:44:08 crc kubenswrapper[4793]: E0126 22:44:08.641773 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="3.2s" Jan 26 22:44:08 crc kubenswrapper[4793]: I0126 22:44:08.760798 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:44:08 crc kubenswrapper[4793]: I0126 22:44:08.761818 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:44:08 crc kubenswrapper[4793]: I0126 22:44:08.762409 4793 status_manager.go:851] "Failed to get status for pod" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:44:08 crc kubenswrapper[4793]: I0126 22:44:08.776115 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="65af0fd8-004c-4fdc-97f5-05d8bf6c8127" Jan 26 22:44:08 crc kubenswrapper[4793]: I0126 22:44:08.776176 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="65af0fd8-004c-4fdc-97f5-05d8bf6c8127" Jan 26 22:44:08 crc kubenswrapper[4793]: E0126 22:44:08.776793 4793 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:44:08 crc kubenswrapper[4793]: I0126 22:44:08.777656 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.306264 4793 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="1a36f4e00ea348ef3f075af47a961cbd3441e01399ebdb64a044398c53971652" exitCode=0 Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.306322 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"1a36f4e00ea348ef3f075af47a961cbd3441e01399ebdb64a044398c53971652"} Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.306414 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d91fec7ba1c2e67d7feda711d0511808d8d6e717319e7d8be72eb49e1a117b69"} Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.306805 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="65af0fd8-004c-4fdc-97f5-05d8bf6c8127" Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.306826 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="65af0fd8-004c-4fdc-97f5-05d8bf6c8127" Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.307363 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:44:09 crc kubenswrapper[4793]: E0126 22:44:09.307392 4793 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.307806 4793 status_manager.go:851] "Failed to get status for pod" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.311285 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.311331 4793 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3" exitCode=1 Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.311357 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3"} Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.311632 4793 scope.go:117] "RemoveContainer" containerID="e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3" Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.312230 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 
38.102.83.46:6443: connect: connection refused"
Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.312766 4793 status_manager.go:851] "Failed to get status for pod" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.313227 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.382615 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 22:44:10 crc kubenswrapper[4793]: I0126 22:44:10.323174 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 26 22:44:10 crc kubenswrapper[4793]: I0126 22:44:10.323404 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3b6437b716cf52361dcf2bc1f013b54e24cd4d66c149476d7391214aafbc20ea"}
Jan 26 22:44:10 crc kubenswrapper[4793]: I0126 22:44:10.331082 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f33468285fa6093adfd00418dbeaf7bbe1e86a6ea7d2251e2026241bcefbfbdd"}
Jan 26 22:44:11 crc kubenswrapper[4793]: I0126 22:44:11.338412 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e2d59cda4e5550c2b60a74f78a216e8147d7f962b0d8028a615f3e0fa1ed0c1d"}
Jan 26 22:44:11 crc kubenswrapper[4793]: I0126 22:44:11.338810 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"567d460440a2c250abe366cd0efe6c27a9e00d8d58f03ac6631e7d15ab391bf2"}
Jan 26 22:44:12 crc kubenswrapper[4793]: I0126 22:44:12.347077 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0e5fe672a6324af5f20b2de3228332cd1d7e464ebb01e0507f988da14bf9c9d9"}
Jan 26 22:44:12 crc kubenswrapper[4793]: I0126 22:44:12.347129 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a02effc4db3b7794d055eb97eaff2c5c6dbcc6614d35445866f0579feea3fdc8"}
Jan 26 22:44:12 crc kubenswrapper[4793]: I0126 22:44:12.347632 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="65af0fd8-004c-4fdc-97f5-05d8bf6c8127"
Jan 26 22:44:12 crc kubenswrapper[4793]: I0126 22:44:12.347661 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="65af0fd8-004c-4fdc-97f5-05d8bf6c8127"
Jan 26 22:44:13 crc kubenswrapper[4793]: I0126 22:44:13.777928 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:44:13 crc kubenswrapper[4793]: I0126 22:44:13.778480 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:44:13 crc kubenswrapper[4793]: I0126 22:44:13.787743 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:44:15 crc kubenswrapper[4793]: I0126 22:44:15.653901 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 22:44:15 crc kubenswrapper[4793]: I0126 22:44:15.658069 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 22:44:16 crc kubenswrapper[4793]: I0126 22:44:16.373364 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 22:44:17 crc kubenswrapper[4793]: I0126 22:44:17.358601 4793 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:44:17 crc kubenswrapper[4793]: I0126 22:44:17.383477 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:44:17 crc kubenswrapper[4793]: I0126 22:44:17.383692 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="65af0fd8-004c-4fdc-97f5-05d8bf6c8127"
Jan 26 22:44:17 crc kubenswrapper[4793]: I0126 22:44:17.383736 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="65af0fd8-004c-4fdc-97f5-05d8bf6c8127"
Jan 26 22:44:17 crc kubenswrapper[4793]: I0126 22:44:17.388053 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:44:17 crc kubenswrapper[4793]: I0126 22:44:17.390582 4793 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="aeea3e8c-33a1-40b4-9b77-f03da96a2f18"
Jan 26 22:44:18 crc kubenswrapper[4793]: I0126 22:44:18.387402 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="65af0fd8-004c-4fdc-97f5-05d8bf6c8127"
Jan 26 22:44:18 crc kubenswrapper[4793]: I0126 22:44:18.388384 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="65af0fd8-004c-4fdc-97f5-05d8bf6c8127"
Jan 26 22:44:19 crc kubenswrapper[4793]: I0126 22:44:19.390004 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 22:44:25 crc kubenswrapper[4793]: I0126 22:44:25.783311 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 26 22:44:25 crc kubenswrapper[4793]: I0126 22:44:25.794732 4793 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="aeea3e8c-33a1-40b4-9b77-f03da96a2f18"
Jan 26 22:44:27 crc kubenswrapper[4793]: I0126 22:44:27.623925 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 26 22:44:28 crc kubenswrapper[4793]: I0126 22:44:28.009558 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 26 22:44:28 crc kubenswrapper[4793]: I0126 22:44:28.053308 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 26 22:44:28 crc kubenswrapper[4793]: I0126 22:44:28.263994 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 26 22:44:28 crc kubenswrapper[4793]: I0126 22:44:28.335067 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 26 22:44:28 crc kubenswrapper[4793]: I0126 22:44:28.498814 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 26 22:44:28 crc kubenswrapper[4793]: I0126 22:44:28.616986 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 26 22:44:28 crc kubenswrapper[4793]: I0126 22:44:28.713902 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 26 22:44:28 crc kubenswrapper[4793]: I0126 22:44:28.981639 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 26 22:44:28 crc kubenswrapper[4793]: I0126 22:44:28.983528 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.163103 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.377223 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.387882 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.391545 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.414304 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.419905 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.588258 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.672176 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.730733 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.761629 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.782718 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.813720 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.978003 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.994443 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 26 22:44:30 crc kubenswrapper[4793]: I0126 22:44:30.027027 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 26 22:44:30 crc kubenswrapper[4793]: I0126 22:44:30.039734 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 26 22:44:30 crc kubenswrapper[4793]: I0126 22:44:30.047992 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 26 22:44:30 crc kubenswrapper[4793]: I0126 22:44:30.191406 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 26 22:44:30 crc kubenswrapper[4793]: I0126 22:44:30.212378 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 26 22:44:30 crc kubenswrapper[4793]: I0126 22:44:30.220662 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 26 22:44:30 crc kubenswrapper[4793]: I0126 22:44:30.347536 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 26 22:44:30 crc kubenswrapper[4793]: I0126 22:44:30.658366 4793 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 26 22:44:30 crc kubenswrapper[4793]: I0126 22:44:30.837162 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 26 22:44:30 crc kubenswrapper[4793]: I0126 22:44:30.916290 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 26 22:44:30 crc kubenswrapper[4793]: I0126 22:44:30.973800 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 26 22:44:31 crc kubenswrapper[4793]: I0126 22:44:31.126030 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 26 22:44:31 crc kubenswrapper[4793]: I0126 22:44:31.163435 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 26 22:44:31 crc kubenswrapper[4793]: I0126 22:44:31.172168 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 26 22:44:31 crc kubenswrapper[4793]: I0126 22:44:31.186905 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 26 22:44:31 crc kubenswrapper[4793]: I0126 22:44:31.194783 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 26 22:44:31 crc kubenswrapper[4793]: I0126 22:44:31.199081 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 26 22:44:31 crc kubenswrapper[4793]: I0126 22:44:31.217001 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 26 22:44:31 crc kubenswrapper[4793]: I0126 22:44:31.432985 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 26 22:44:31 crc kubenswrapper[4793]: I0126 22:44:31.760328 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 26 22:44:31 crc kubenswrapper[4793]: I0126 22:44:31.838263 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 26 22:44:31 crc kubenswrapper[4793]: I0126 22:44:31.847645 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.005083 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.047841 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.063689 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.084410 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.112760 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.374316 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.466416 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.543749 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.622365 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.655742 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.716635 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.743966 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.867025 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.868639 4793 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.003029 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.006116 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.015999 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.017555 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.078693 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.113007 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.166787 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.195478 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.207335 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.251591 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.304813 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.496291 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.516407 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.545734 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.581819 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.615250 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.634751 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.689568 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.706925 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.730954 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.828738 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.868218 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.992089 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.024647 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.158824 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.243833 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.268756 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.309138 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.394711 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.464950 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.486340 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.573689 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.577974 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.625349 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.656864 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.729891 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.785260 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.799846 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.873024 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.884091 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.953951 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 26 22:44:35 crc kubenswrapper[4793]: I0126 22:44:35.095909 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 26 22:44:35 crc kubenswrapper[4793]: I0126 22:44:35.194264 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 26 22:44:35 crc kubenswrapper[4793]: I0126 22:44:35.334720 4793 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 26 22:44:35 crc kubenswrapper[4793]: I0126 22:44:35.349272 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 26 22:44:35 crc kubenswrapper[4793]: I0126 22:44:35.351337 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 26 22:44:35 crc kubenswrapper[4793]: I0126 22:44:35.469201 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 26 22:44:35 crc kubenswrapper[4793]: I0126 22:44:35.620244 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 26 22:44:35 crc kubenswrapper[4793]: I0126 22:44:35.739943 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 26 22:44:35 crc kubenswrapper[4793]: I0126 22:44:35.809861 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 26 22:44:35 crc kubenswrapper[4793]: I0126 22:44:35.852732 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 26 22:44:36 crc kubenswrapper[4793]: I0126 22:44:36.007996 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 26 22:44:36 crc kubenswrapper[4793]: I0126 22:44:36.066364 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 26 22:44:36 crc kubenswrapper[4793]: I0126 22:44:36.096634 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 26 22:44:36 crc kubenswrapper[4793]: I0126 22:44:36.115934 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 26 22:44:36 crc kubenswrapper[4793]: I0126 22:44:36.140052 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 26 22:44:36 crc kubenswrapper[4793]: I0126 22:44:36.143672 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 26 22:44:36 crc kubenswrapper[4793]: I0126 22:44:36.478639 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 26 22:44:36 crc kubenswrapper[4793]: I0126 22:44:36.637690 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 26 22:44:36 crc kubenswrapper[4793]: I0126 22:44:36.873302 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 26 22:44:36 crc kubenswrapper[4793]: I0126 22:44:36.889700 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 26 22:44:36 crc kubenswrapper[4793]: I0126 22:44:36.896608 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.003459 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.062061 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.161388 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.169718 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.237047 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.245591 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.280118 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.312083 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.337922 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.408478 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.414223 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.447669 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.475231 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.477580 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.509859 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.532656 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.571183 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.658135 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.715818 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.740040 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.838332 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.843212 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.928478 4793 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.934654 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=41.934622299 podStartE2EDuration="41.934622299s" podCreationTimestamp="2026-01-26 22:43:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:44:16.279184696 +0000 UTC m=+271.267956258" watchObservedRunningTime="2026-01-26 22:44:37.934622299 +0000 UTC m=+292.923393841"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.938003 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.938080 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-84fc7bd96f-65hxg"]
Jan 26 22:44:37 crc kubenswrapper[4793]: E0126 22:44:37.938534 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" containerName="installer"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.938568 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" containerName="installer"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.938764 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="65af0fd8-004c-4fdc-97f5-05d8bf6c8127"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.938799 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" containerName="installer"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.938808 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="65af0fd8-004c-4fdc-97f5-05d8bf6c8127"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.939639 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.946307 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.949343 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.950261 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.950470 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.950693 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.950723 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.950916 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.951086 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.951763 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.952159 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.952597 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.953544 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.954761 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.960776 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.968316 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.972024 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.995573 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-cliconfig\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.995641 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24e166d9-621b-468f-baca-a49eeafbbb16-audit-policies\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.995678 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-user-template-login\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.995710 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqjzq\" (UniqueName: \"kubernetes.io/projected/24e166d9-621b-468f-baca-a49eeafbbb16-kube-api-access-hqjzq\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg"
Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.995738 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName:
\"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.995823 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-session\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.995872 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.995912 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-user-template-error\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.995941 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.995966 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-serving-cert\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.996030 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.996062 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-router-certs\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.996100 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " 
pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.996142 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24e166d9-621b-468f-baca-a49eeafbbb16-audit-dir\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.009500 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.009444619 podStartE2EDuration="21.009444619s" podCreationTimestamp="2026-01-26 22:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:44:38.008285033 +0000 UTC m=+292.997056575" watchObservedRunningTime="2026-01-26 22:44:38.009444619 +0000 UTC m=+292.998216141" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.082553 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.090595 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.097844 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.097895 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-router-certs\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.097953 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.097998 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24e166d9-621b-468f-baca-a49eeafbbb16-audit-dir\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.098028 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-cliconfig\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.098058 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24e166d9-621b-468f-baca-a49eeafbbb16-audit-policies\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " 
pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.098082 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-user-template-login\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.098111 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqjzq\" (UniqueName: \"kubernetes.io/projected/24e166d9-621b-468f-baca-a49eeafbbb16-kube-api-access-hqjzq\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.098142 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.098173 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-session\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.098212 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/24e166d9-621b-468f-baca-a49eeafbbb16-audit-dir\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.098215 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.098326 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-user-template-error\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.098359 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-serving-cert\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.098388 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-service-ca\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " 
pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.100049 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-cliconfig\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.100107 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-service-ca\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.100153 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24e166d9-621b-468f-baca-a49eeafbbb16-audit-policies\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.100306 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.104991 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-router-certs\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.105015 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-session\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.106051 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-serving-cert\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.106370 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.106443 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " 
pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.106450 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-user-template-login\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.107421 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.106976 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-user-template-error\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.122179 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqjzq\" (UniqueName: \"kubernetes.io/projected/24e166d9-621b-468f-baca-a49eeafbbb16-kube-api-access-hqjzq\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.165952 4793 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 22:44:38 crc kubenswrapper[4793]: 
I0126 22:44:38.179218 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.223030 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.230440 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.268963 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.321862 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.332456 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.366347 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.385930 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.430544 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.516589 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.646131 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 22:44:38 crc 
kubenswrapper[4793]: I0126 22:44:38.704532 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.725619 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-84fc7bd96f-65hxg"] Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.733154 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.828244 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.846406 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.872402 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.959558 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.011551 4793 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.012450 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://c03cd50aac0b902e0ae894f30cc6ed356e9cc34124740561b34340f71813b806" gracePeriod=5 Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.081854 4793 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.130429 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.165362 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.261567 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.345693 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.457675 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.459382 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.488366 4793 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.529953 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" event={"ID":"24e166d9-621b-468f-baca-a49eeafbbb16","Type":"ContainerStarted","Data":"4b3eabc9dec90ebf29859930dee2e754c8fcc01183a47c58bb3aa26ee201a046"} Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.530010 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" 
event={"ID":"24e166d9-621b-468f-baca-a49eeafbbb16","Type":"ContainerStarted","Data":"9d8837ae67c6cf3ce04ecaec7fd6c79106e74bf7d645ee7e2c491c88961b5320"} Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.530439 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.591871 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.644866 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.676552 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.692454 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.706681 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.731334 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.733095 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" podStartSLOduration=74.733061118 podStartE2EDuration="1m14.733061118s" podCreationTimestamp="2026-01-26 22:43:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:44:39.574445263 +0000 UTC m=+294.563216775" 
watchObservedRunningTime="2026-01-26 22:44:39.733061118 +0000 UTC m=+294.721832660" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.757491 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.777317 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.778824 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.875457 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.010912 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.050168 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.114500 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.213843 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.223762 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.261643 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.267842 4793 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.381527 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.381922 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.404947 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.437922 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.568677 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.634103 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.659646 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.767490 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.794880 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.980902 4793 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.987786 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.988230 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.994412 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 22:44:41 crc kubenswrapper[4793]: I0126 22:44:41.059969 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 22:44:41 crc kubenswrapper[4793]: I0126 22:44:41.271685 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 22:44:41 crc kubenswrapper[4793]: I0126 22:44:41.295043 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 22:44:41 crc kubenswrapper[4793]: I0126 22:44:41.379815 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 22:44:41 crc kubenswrapper[4793]: I0126 22:44:41.397925 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 22:44:41 crc kubenswrapper[4793]: I0126 22:44:41.413591 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 22:44:41 crc kubenswrapper[4793]: I0126 22:44:41.440656 4793 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"image-import-ca" Jan 26 22:44:41 crc kubenswrapper[4793]: I0126 22:44:41.527987 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 22:44:41 crc kubenswrapper[4793]: I0126 22:44:41.630027 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 22:44:41 crc kubenswrapper[4793]: I0126 22:44:41.666737 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 22:44:41 crc kubenswrapper[4793]: I0126 22:44:41.690562 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 22:44:41 crc kubenswrapper[4793]: I0126 22:44:41.767920 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 22:44:42 crc kubenswrapper[4793]: I0126 22:44:42.014643 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 22:44:42 crc kubenswrapper[4793]: I0126 22:44:42.033140 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 22:44:42 crc kubenswrapper[4793]: I0126 22:44:42.283170 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 22:44:42 crc kubenswrapper[4793]: I0126 22:44:42.367245 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 22:44:42 crc kubenswrapper[4793]: I0126 22:44:42.428366 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 22:44:42 crc kubenswrapper[4793]: I0126 22:44:42.428711 4793 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 22:44:42 crc kubenswrapper[4793]: I0126 22:44:42.448328 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 22:44:42 crc kubenswrapper[4793]: I0126 22:44:42.607001 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 22:44:42 crc kubenswrapper[4793]: I0126 22:44:42.660327 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 22:44:42 crc kubenswrapper[4793]: I0126 22:44:42.756070 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 22:44:42 crc kubenswrapper[4793]: I0126 22:44:42.763025 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 22:44:43 crc kubenswrapper[4793]: I0126 22:44:43.037960 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 22:44:43 crc kubenswrapper[4793]: I0126 22:44:43.397336 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 22:44:43 crc kubenswrapper[4793]: I0126 22:44:43.419869 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 22:44:43 crc kubenswrapper[4793]: I0126 22:44:43.606482 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 22:44:43 crc kubenswrapper[4793]: I0126 22:44:43.680068 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 22:44:43 crc 
kubenswrapper[4793]: I0126 22:44:43.712343 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.051259 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.439572 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.576401 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.576491 4793 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="c03cd50aac0b902e0ae894f30cc6ed356e9cc34124740561b34340f71813b806" exitCode=137 Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.672672 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.672764 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.723846 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.723991 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.724035 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.724100 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.724088 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.724232 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.724103 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.724132 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.724264 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.724550 4793 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.724572 4793 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.724591 4793 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.724609 4793 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.737457 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.809123 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.825910 4793 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.908555 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 22:44:45 crc kubenswrapper[4793]: I0126 22:44:45.524783 4793 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 26 22:44:45 crc kubenswrapper[4793]: I0126 22:44:45.587915 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 22:44:45 crc kubenswrapper[4793]: I0126 22:44:45.588080 4793 scope.go:117] "RemoveContainer" containerID="c03cd50aac0b902e0ae894f30cc6ed356e9cc34124740561b34340f71813b806" Jan 26 22:44:45 crc kubenswrapper[4793]: I0126 22:44:45.588239 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:44:45 crc kubenswrapper[4793]: I0126 22:44:45.758130 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 22:44:45 crc kubenswrapper[4793]: I0126 22:44:45.774525 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 26 22:44:45 crc kubenswrapper[4793]: I0126 22:44:45.775077 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 26 22:44:45 crc kubenswrapper[4793]: I0126 22:44:45.793605 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 22:44:45 crc kubenswrapper[4793]: I0126 22:44:45.793662 4793 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="55b356c5-65be-4a87-9885-4e7c0be41ff8" Jan 26 22:44:45 crc kubenswrapper[4793]: I0126 22:44:45.800835 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 22:44:45 crc kubenswrapper[4793]: I0126 22:44:45.800913 4793 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="55b356c5-65be-4a87-9885-4e7c0be41ff8" Jan 26 22:44:46 crc kubenswrapper[4793]: I0126 22:44:46.506146 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.193898 4793 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24"] Jan 26 22:45:00 crc kubenswrapper[4793]: E0126 22:45:00.194963 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.195003 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.195311 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.196015 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.201630 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.202777 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24"] Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.204488 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.268249 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17076622-6d81-4499-9a65-dacfb8b78e44-config-volume\") pod \"collect-profiles-29491125-4dp24\" (UID: \"17076622-6d81-4499-9a65-dacfb8b78e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.268386 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17076622-6d81-4499-9a65-dacfb8b78e44-secret-volume\") pod \"collect-profiles-29491125-4dp24\" (UID: \"17076622-6d81-4499-9a65-dacfb8b78e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.268451 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mv5r\" (UniqueName: \"kubernetes.io/projected/17076622-6d81-4499-9a65-dacfb8b78e44-kube-api-access-5mv5r\") pod \"collect-profiles-29491125-4dp24\" (UID: \"17076622-6d81-4499-9a65-dacfb8b78e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.369620 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17076622-6d81-4499-9a65-dacfb8b78e44-secret-volume\") pod \"collect-profiles-29491125-4dp24\" (UID: \"17076622-6d81-4499-9a65-dacfb8b78e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.369691 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mv5r\" (UniqueName: \"kubernetes.io/projected/17076622-6d81-4499-9a65-dacfb8b78e44-kube-api-access-5mv5r\") pod \"collect-profiles-29491125-4dp24\" (UID: \"17076622-6d81-4499-9a65-dacfb8b78e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.369793 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17076622-6d81-4499-9a65-dacfb8b78e44-config-volume\") pod \"collect-profiles-29491125-4dp24\" (UID: 
\"17076622-6d81-4499-9a65-dacfb8b78e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.371578 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17076622-6d81-4499-9a65-dacfb8b78e44-config-volume\") pod \"collect-profiles-29491125-4dp24\" (UID: \"17076622-6d81-4499-9a65-dacfb8b78e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.380596 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17076622-6d81-4499-9a65-dacfb8b78e44-secret-volume\") pod \"collect-profiles-29491125-4dp24\" (UID: \"17076622-6d81-4499-9a65-dacfb8b78e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.402842 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mv5r\" (UniqueName: \"kubernetes.io/projected/17076622-6d81-4499-9a65-dacfb8b78e44-kube-api-access-5mv5r\") pod \"collect-profiles-29491125-4dp24\" (UID: \"17076622-6d81-4499-9a65-dacfb8b78e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.519452 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:01 crc kubenswrapper[4793]: I0126 22:45:00.998691 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24"] Jan 26 22:45:01 crc kubenswrapper[4793]: W0126 22:45:01.014057 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17076622_6d81_4499_9a65_dacfb8b78e44.slice/crio-b8a5e10b04a7f555513a29c82207431eb7780598af17084155cedf9ae4d32c0b WatchSource:0}: Error finding container b8a5e10b04a7f555513a29c82207431eb7780598af17084155cedf9ae4d32c0b: Status 404 returned error can't find the container with id b8a5e10b04a7f555513a29c82207431eb7780598af17084155cedf9ae4d32c0b Jan 26 22:45:01 crc kubenswrapper[4793]: I0126 22:45:01.722261 4793 generic.go:334] "Generic (PLEG): container finished" podID="17076622-6d81-4499-9a65-dacfb8b78e44" containerID="c89a43e25baddf68cc1b158d6f9af509bbe62ff02e3f164e8b8fc64bef65b4f7" exitCode=0 Jan 26 22:45:01 crc kubenswrapper[4793]: I0126 22:45:01.722369 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" event={"ID":"17076622-6d81-4499-9a65-dacfb8b78e44","Type":"ContainerDied","Data":"c89a43e25baddf68cc1b158d6f9af509bbe62ff02e3f164e8b8fc64bef65b4f7"} Jan 26 22:45:01 crc kubenswrapper[4793]: I0126 22:45:01.722690 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" event={"ID":"17076622-6d81-4499-9a65-dacfb8b78e44","Type":"ContainerStarted","Data":"b8a5e10b04a7f555513a29c82207431eb7780598af17084155cedf9ae4d32c0b"} Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.069114 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.211723 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mv5r\" (UniqueName: \"kubernetes.io/projected/17076622-6d81-4499-9a65-dacfb8b78e44-kube-api-access-5mv5r\") pod \"17076622-6d81-4499-9a65-dacfb8b78e44\" (UID: \"17076622-6d81-4499-9a65-dacfb8b78e44\") " Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.211893 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17076622-6d81-4499-9a65-dacfb8b78e44-secret-volume\") pod \"17076622-6d81-4499-9a65-dacfb8b78e44\" (UID: \"17076622-6d81-4499-9a65-dacfb8b78e44\") " Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.212010 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17076622-6d81-4499-9a65-dacfb8b78e44-config-volume\") pod \"17076622-6d81-4499-9a65-dacfb8b78e44\" (UID: \"17076622-6d81-4499-9a65-dacfb8b78e44\") " Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.212813 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17076622-6d81-4499-9a65-dacfb8b78e44-config-volume" (OuterVolumeSpecName: "config-volume") pod "17076622-6d81-4499-9a65-dacfb8b78e44" (UID: "17076622-6d81-4499-9a65-dacfb8b78e44"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.222256 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17076622-6d81-4499-9a65-dacfb8b78e44-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "17076622-6d81-4499-9a65-dacfb8b78e44" (UID: "17076622-6d81-4499-9a65-dacfb8b78e44"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.222318 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17076622-6d81-4499-9a65-dacfb8b78e44-kube-api-access-5mv5r" (OuterVolumeSpecName: "kube-api-access-5mv5r") pod "17076622-6d81-4499-9a65-dacfb8b78e44" (UID: "17076622-6d81-4499-9a65-dacfb8b78e44"). InnerVolumeSpecName "kube-api-access-5mv5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.313665 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17076622-6d81-4499-9a65-dacfb8b78e44-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.313718 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mv5r\" (UniqueName: \"kubernetes.io/projected/17076622-6d81-4499-9a65-dacfb8b78e44-kube-api-access-5mv5r\") on node \"crc\" DevicePath \"\"" Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.313729 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17076622-6d81-4499-9a65-dacfb8b78e44-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.744599 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" event={"ID":"17076622-6d81-4499-9a65-dacfb8b78e44","Type":"ContainerDied","Data":"b8a5e10b04a7f555513a29c82207431eb7780598af17084155cedf9ae4d32c0b"} Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.744671 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8a5e10b04a7f555513a29c82207431eb7780598af17084155cedf9ae4d32c0b" Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.744802 4793 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:48 crc kubenswrapper[4793]: I0126 22:45:48.323246 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:45:48 crc kubenswrapper[4793]: I0126 22:45:48.324044 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.253831 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gckjm"] Jan 26 22:46:07 crc kubenswrapper[4793]: E0126 22:46:07.256764 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17076622-6d81-4499-9a65-dacfb8b78e44" containerName="collect-profiles" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.256888 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="17076622-6d81-4499-9a65-dacfb8b78e44" containerName="collect-profiles" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.257111 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="17076622-6d81-4499-9a65-dacfb8b78e44" containerName="collect-profiles" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.257725 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.291670 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gckjm"] Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.420849 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgt9j\" (UniqueName: \"kubernetes.io/projected/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-kube-api-access-wgt9j\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.420941 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-registry-certificates\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.421152 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.421329 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.421406 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-trusted-ca\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.421570 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-registry-tls\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.421620 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-bound-sa-token\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.421653 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.444630 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.522878 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.523377 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgt9j\" (UniqueName: \"kubernetes.io/projected/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-kube-api-access-wgt9j\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.523607 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-registry-certificates\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.523871 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 
22:46:07.524103 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-trusted-ca\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.524385 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-registry-tls\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.524543 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.524712 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-bound-sa-token\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.525262 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-registry-certificates\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" 
Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.526347 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-trusted-ca\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.531455 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-registry-tls\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.532519 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.551159 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-bound-sa-token\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.551250 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgt9j\" (UniqueName: \"kubernetes.io/projected/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-kube-api-access-wgt9j\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.579275 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.871861 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gckjm"] Jan 26 22:46:08 crc kubenswrapper[4793]: I0126 22:46:08.179542 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" event={"ID":"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738","Type":"ContainerStarted","Data":"da673fbfbc0303c13128b2cd1ba31130c40bf93da90efde81b3877d9be5a326e"} Jan 26 22:46:08 crc kubenswrapper[4793]: I0126 22:46:08.179946 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" event={"ID":"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738","Type":"ContainerStarted","Data":"c91bdf7380cab3494b9657f79348325f310e0b983c6dac1ad7ce7de92515f68c"} Jan 26 22:46:09 crc kubenswrapper[4793]: I0126 22:46:09.185785 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:09 crc kubenswrapper[4793]: I0126 22:46:09.208109 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" podStartSLOduration=2.208082331 podStartE2EDuration="2.208082331s" podCreationTimestamp="2026-01-26 22:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:46:09.204112577 +0000 UTC m=+384.192884089" watchObservedRunningTime="2026-01-26 22:46:09.208082331 +0000 UTC m=+384.196853873" Jan 26 22:46:18 crc kubenswrapper[4793]: I0126 22:46:18.322184 4793 patch_prober.go:28] interesting 
pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:46:18 crc kubenswrapper[4793]: I0126 22:46:18.322887 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.656593 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-78ml2"] Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.657433 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-78ml2" podUID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" containerName="registry-server" containerID="cri-o://bd6863c8a9c27707c6dccdad20d47ee96c33964b5f559bcf79975f2bd5c9666c" gracePeriod=30 Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.673309 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rmxpm"] Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.674514 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rmxpm" podUID="1ea0b73a-1820-4411-bbbf-acd3d22899e0" containerName="registry-server" containerID="cri-o://55f45e1681d34762f5dc72f1975ec4d09f5e4114953396254618a8f16aef59e4" gracePeriod=30 Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.686925 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rzprv"] Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.687167 4793 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" podUID="f7779d32-d1d6-4e24-b59e-04461b1021c3" containerName="marketplace-operator" containerID="cri-o://0261db876c5107325c1d28c55b554ed6cd68dfefffa0d5f854c959425c4e8325" gracePeriod=30 Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.700446 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7s8k"] Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.700743 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s7s8k" podUID="2ece571b-df1f-4605-8127-b71fb41d2189" containerName="registry-server" containerID="cri-o://b983778d2b376f12a054d385d4e78fd86cd0c3fd1e88a5f4df510c38cf861052" gracePeriod=30 Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.718350 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mvxfj"] Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.719229 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.723174 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s2bff"] Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.723587 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s2bff" podUID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" containerName="registry-server" containerID="cri-o://a1da675f56afc66f10254ae8acb8acfbb6033d5b14e0b5831b35b1c1d907371f" gracePeriod=30 Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.729057 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mvxfj"] Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.836871 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/768ab49f-bfcd-43e0-829f-226035ded4c8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mvxfj\" (UID: \"768ab49f-bfcd-43e0-829f-226035ded4c8\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.836938 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/768ab49f-bfcd-43e0-829f-226035ded4c8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mvxfj\" (UID: \"768ab49f-bfcd-43e0-829f-226035ded4c8\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.837505 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpd6c\" (UniqueName: 
\"kubernetes.io/projected/768ab49f-bfcd-43e0-829f-226035ded4c8-kube-api-access-kpd6c\") pod \"marketplace-operator-79b997595-mvxfj\" (UID: \"768ab49f-bfcd-43e0-829f-226035ded4c8\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.938548 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpd6c\" (UniqueName: \"kubernetes.io/projected/768ab49f-bfcd-43e0-829f-226035ded4c8-kube-api-access-kpd6c\") pod \"marketplace-operator-79b997595-mvxfj\" (UID: \"768ab49f-bfcd-43e0-829f-226035ded4c8\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.938665 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/768ab49f-bfcd-43e0-829f-226035ded4c8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mvxfj\" (UID: \"768ab49f-bfcd-43e0-829f-226035ded4c8\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.938715 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/768ab49f-bfcd-43e0-829f-226035ded4c8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mvxfj\" (UID: \"768ab49f-bfcd-43e0-829f-226035ded4c8\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.945868 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/768ab49f-bfcd-43e0-829f-226035ded4c8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mvxfj\" (UID: \"768ab49f-bfcd-43e0-829f-226035ded4c8\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:21 crc 
kubenswrapper[4793]: I0126 22:46:21.949320 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/768ab49f-bfcd-43e0-829f-226035ded4c8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mvxfj\" (UID: \"768ab49f-bfcd-43e0-829f-226035ded4c8\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.954524 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpd6c\" (UniqueName: \"kubernetes.io/projected/768ab49f-bfcd-43e0-829f-226035ded4c8-kube-api-access-kpd6c\") pod \"marketplace-operator-79b997595-mvxfj\" (UID: \"768ab49f-bfcd-43e0-829f-226035ded4c8\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.040807 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.294308 4793 generic.go:334] "Generic (PLEG): container finished" podID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" containerID="bd6863c8a9c27707c6dccdad20d47ee96c33964b5f559bcf79975f2bd5c9666c" exitCode=0 Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.294406 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78ml2" event={"ID":"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125","Type":"ContainerDied","Data":"bd6863c8a9c27707c6dccdad20d47ee96c33964b5f559bcf79975f2bd5c9666c"} Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.296229 4793 generic.go:334] "Generic (PLEG): container finished" podID="f7779d32-d1d6-4e24-b59e-04461b1021c3" containerID="0261db876c5107325c1d28c55b554ed6cd68dfefffa0d5f854c959425c4e8325" exitCode=0 Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.296288 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" event={"ID":"f7779d32-d1d6-4e24-b59e-04461b1021c3","Type":"ContainerDied","Data":"0261db876c5107325c1d28c55b554ed6cd68dfefffa0d5f854c959425c4e8325"} Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.298134 4793 generic.go:334] "Generic (PLEG): container finished" podID="2ece571b-df1f-4605-8127-b71fb41d2189" containerID="b983778d2b376f12a054d385d4e78fd86cd0c3fd1e88a5f4df510c38cf861052" exitCode=0 Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.298227 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7s8k" event={"ID":"2ece571b-df1f-4605-8127-b71fb41d2189","Type":"ContainerDied","Data":"b983778d2b376f12a054d385d4e78fd86cd0c3fd1e88a5f4df510c38cf861052"} Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.300537 4793 generic.go:334] "Generic (PLEG): container finished" podID="1ea0b73a-1820-4411-bbbf-acd3d22899e0" containerID="55f45e1681d34762f5dc72f1975ec4d09f5e4114953396254618a8f16aef59e4" exitCode=0 Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.300598 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmxpm" event={"ID":"1ea0b73a-1820-4411-bbbf-acd3d22899e0","Type":"ContainerDied","Data":"55f45e1681d34762f5dc72f1975ec4d09f5e4114953396254618a8f16aef59e4"} Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.303084 4793 generic.go:334] "Generic (PLEG): container finished" podID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" containerID="a1da675f56afc66f10254ae8acb8acfbb6033d5b14e0b5831b35b1c1d907371f" exitCode=0 Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.303105 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2bff" event={"ID":"b0a722d2-056a-4bf2-a33c-719ee8aba7a8","Type":"ContainerDied","Data":"a1da675f56afc66f10254ae8acb8acfbb6033d5b14e0b5831b35b1c1d907371f"} Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 
22:46:22.446494 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mvxfj"] Jan 26 22:46:22 crc kubenswrapper[4793]: W0126 22:46:22.457689 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod768ab49f_bfcd_43e0_829f_226035ded4c8.slice/crio-d92285567c08296070d3b4dde0b57c53acb3e65859c47cf36cbe340324077b93 WatchSource:0}: Error finding container d92285567c08296070d3b4dde0b57c53acb3e65859c47cf36cbe340324077b93: Status 404 returned error can't find the container with id d92285567c08296070d3b4dde0b57c53acb3e65859c47cf36cbe340324077b93 Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.494612 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.549928 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-catalog-content\") pod \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\" (UID: \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.549986 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-utilities\") pod \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\" (UID: \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.550097 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tf6c\" (UniqueName: \"kubernetes.io/projected/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-kube-api-access-7tf6c\") pod \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\" (UID: \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 
22:46:22.551215 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-utilities" (OuterVolumeSpecName: "utilities") pod "ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" (UID: "ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.562838 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-kube-api-access-7tf6c" (OuterVolumeSpecName: "kube-api-access-7tf6c") pod "ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" (UID: "ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125"). InnerVolumeSpecName "kube-api-access-7tf6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.607139 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.623931 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" (UID: "ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.646161 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s2bff" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.650523 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-utilities\") pod \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\" (UID: \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.650563 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea0b73a-1820-4411-bbbf-acd3d22899e0-utilities\") pod \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\" (UID: \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.650621 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lhnk\" (UniqueName: \"kubernetes.io/projected/1ea0b73a-1820-4411-bbbf-acd3d22899e0-kube-api-access-5lhnk\") pod \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\" (UID: \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.650659 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm6z5\" (UniqueName: \"kubernetes.io/projected/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-kube-api-access-zm6z5\") pod \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\" (UID: \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.650737 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea0b73a-1820-4411-bbbf-acd3d22899e0-catalog-content\") pod \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\" (UID: \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.650772 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-catalog-content\") pod \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\" (UID: \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.650964 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tf6c\" (UniqueName: \"kubernetes.io/projected/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-kube-api-access-7tf6c\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.650981 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.650994 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.651724 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ea0b73a-1820-4411-bbbf-acd3d22899e0-utilities" (OuterVolumeSpecName: "utilities") pod "1ea0b73a-1820-4411-bbbf-acd3d22899e0" (UID: "1ea0b73a-1820-4411-bbbf-acd3d22899e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.652760 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-utilities" (OuterVolumeSpecName: "utilities") pod "b0a722d2-056a-4bf2-a33c-719ee8aba7a8" (UID: "b0a722d2-056a-4bf2-a33c-719ee8aba7a8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.664440 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea0b73a-1820-4411-bbbf-acd3d22899e0-kube-api-access-5lhnk" (OuterVolumeSpecName: "kube-api-access-5lhnk") pod "1ea0b73a-1820-4411-bbbf-acd3d22899e0" (UID: "1ea0b73a-1820-4411-bbbf-acd3d22899e0"). InnerVolumeSpecName "kube-api-access-5lhnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.664471 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-kube-api-access-zm6z5" (OuterVolumeSpecName: "kube-api-access-zm6z5") pod "b0a722d2-056a-4bf2-a33c-719ee8aba7a8" (UID: "b0a722d2-056a-4bf2-a33c-719ee8aba7a8"). InnerVolumeSpecName "kube-api-access-zm6z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.669086 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s7s8k" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.669885 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.717789 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ea0b73a-1820-4411-bbbf-acd3d22899e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ea0b73a-1820-4411-bbbf-acd3d22899e0" (UID: "1ea0b73a-1820-4411-bbbf-acd3d22899e0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.752582 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea0b73a-1820-4411-bbbf-acd3d22899e0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.752640 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.752653 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea0b73a-1820-4411-bbbf-acd3d22899e0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.752670 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lhnk\" (UniqueName: \"kubernetes.io/projected/1ea0b73a-1820-4411-bbbf-acd3d22899e0-kube-api-access-5lhnk\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.752686 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm6z5\" (UniqueName: \"kubernetes.io/projected/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-kube-api-access-zm6z5\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.813093 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0a722d2-056a-4bf2-a33c-719ee8aba7a8" (UID: "b0a722d2-056a-4bf2-a33c-719ee8aba7a8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.853469 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f7779d32-d1d6-4e24-b59e-04461b1021c3-marketplace-operator-metrics\") pod \"f7779d32-d1d6-4e24-b59e-04461b1021c3\" (UID: \"f7779d32-d1d6-4e24-b59e-04461b1021c3\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.853535 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqv7k\" (UniqueName: \"kubernetes.io/projected/f7779d32-d1d6-4e24-b59e-04461b1021c3-kube-api-access-kqv7k\") pod \"f7779d32-d1d6-4e24-b59e-04461b1021c3\" (UID: \"f7779d32-d1d6-4e24-b59e-04461b1021c3\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.853574 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwhp7\" (UniqueName: \"kubernetes.io/projected/2ece571b-df1f-4605-8127-b71fb41d2189-kube-api-access-rwhp7\") pod \"2ece571b-df1f-4605-8127-b71fb41d2189\" (UID: \"2ece571b-df1f-4605-8127-b71fb41d2189\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.853618 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f7779d32-d1d6-4e24-b59e-04461b1021c3-marketplace-trusted-ca\") pod \"f7779d32-d1d6-4e24-b59e-04461b1021c3\" (UID: \"f7779d32-d1d6-4e24-b59e-04461b1021c3\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.853701 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ece571b-df1f-4605-8127-b71fb41d2189-catalog-content\") pod \"2ece571b-df1f-4605-8127-b71fb41d2189\" (UID: \"2ece571b-df1f-4605-8127-b71fb41d2189\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.853740 4793 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ece571b-df1f-4605-8127-b71fb41d2189-utilities\") pod \"2ece571b-df1f-4605-8127-b71fb41d2189\" (UID: \"2ece571b-df1f-4605-8127-b71fb41d2189\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.853959 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.854587 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7779d32-d1d6-4e24-b59e-04461b1021c3-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "f7779d32-d1d6-4e24-b59e-04461b1021c3" (UID: "f7779d32-d1d6-4e24-b59e-04461b1021c3"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.854839 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ece571b-df1f-4605-8127-b71fb41d2189-utilities" (OuterVolumeSpecName: "utilities") pod "2ece571b-df1f-4605-8127-b71fb41d2189" (UID: "2ece571b-df1f-4605-8127-b71fb41d2189"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.858321 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ece571b-df1f-4605-8127-b71fb41d2189-kube-api-access-rwhp7" (OuterVolumeSpecName: "kube-api-access-rwhp7") pod "2ece571b-df1f-4605-8127-b71fb41d2189" (UID: "2ece571b-df1f-4605-8127-b71fb41d2189"). InnerVolumeSpecName "kube-api-access-rwhp7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.858635 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7779d32-d1d6-4e24-b59e-04461b1021c3-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "f7779d32-d1d6-4e24-b59e-04461b1021c3" (UID: "f7779d32-d1d6-4e24-b59e-04461b1021c3"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.858785 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7779d32-d1d6-4e24-b59e-04461b1021c3-kube-api-access-kqv7k" (OuterVolumeSpecName: "kube-api-access-kqv7k") pod "f7779d32-d1d6-4e24-b59e-04461b1021c3" (UID: "f7779d32-d1d6-4e24-b59e-04461b1021c3"). InnerVolumeSpecName "kube-api-access-kqv7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.879763 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ece571b-df1f-4605-8127-b71fb41d2189-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ece571b-df1f-4605-8127-b71fb41d2189" (UID: "2ece571b-df1f-4605-8127-b71fb41d2189"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.954930 4793 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f7779d32-d1d6-4e24-b59e-04461b1021c3-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.955331 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqv7k\" (UniqueName: \"kubernetes.io/projected/f7779d32-d1d6-4e24-b59e-04461b1021c3-kube-api-access-kqv7k\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.955420 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwhp7\" (UniqueName: \"kubernetes.io/projected/2ece571b-df1f-4605-8127-b71fb41d2189-kube-api-access-rwhp7\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.955498 4793 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f7779d32-d1d6-4e24-b59e-04461b1021c3-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.955574 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ece571b-df1f-4605-8127-b71fb41d2189-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.955648 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ece571b-df1f-4605-8127-b71fb41d2189-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.309834 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmxpm" 
event={"ID":"1ea0b73a-1820-4411-bbbf-acd3d22899e0","Type":"ContainerDied","Data":"b6c39e29a72b59338ca03d2810962928be44b01ddeee62784d5e0ce96eb5572d"} Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.309913 4793 scope.go:117] "RemoveContainer" containerID="55f45e1681d34762f5dc72f1975ec4d09f5e4114953396254618a8f16aef59e4" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.309862 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.312250 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2bff" event={"ID":"b0a722d2-056a-4bf2-a33c-719ee8aba7a8","Type":"ContainerDied","Data":"ab28d055e5e6d116639cc20c8075ccee993ec8ea1d3f84c9e75e2020b4dbfa88"} Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.312269 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s2bff" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.315135 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78ml2" event={"ID":"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125","Type":"ContainerDied","Data":"de4523d60ec6d9e2d26e56021a3eb00ace652444b6905d5221950bbb9e65363d"} Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.315332 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.317434 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" event={"ID":"768ab49f-bfcd-43e0-829f-226035ded4c8","Type":"ContainerStarted","Data":"201f8e5f9df01303aefdf163e8149434446d4b73605da6880d1c578cf8971b3d"} Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.317683 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.317699 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" event={"ID":"768ab49f-bfcd-43e0-829f-226035ded4c8","Type":"ContainerStarted","Data":"d92285567c08296070d3b4dde0b57c53acb3e65859c47cf36cbe340324077b93"} Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.318996 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" event={"ID":"f7779d32-d1d6-4e24-b59e-04461b1021c3","Type":"ContainerDied","Data":"7bb09463affe65b8c8341ff8758a1535e4d3942375b31382f2863df221ac542b"} Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.319067 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.321018 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7s8k" event={"ID":"2ece571b-df1f-4605-8127-b71fb41d2189","Type":"ContainerDied","Data":"ea2f41ae1538a21ff971c4ee16cf82bf0b9ff3fb1449256723e833b410cd7678"} Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.321089 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s7s8k" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.329536 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.338338 4793 scope.go:117] "RemoveContainer" containerID="f47964ab77003c1812c68563a6ab332512b80c93bca84785abf6954776142bb0" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.344627 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" podStartSLOduration=2.344603446 podStartE2EDuration="2.344603446s" podCreationTimestamp="2026-01-26 22:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:46:23.340329952 +0000 UTC m=+398.329101474" watchObservedRunningTime="2026-01-26 22:46:23.344603446 +0000 UTC m=+398.333374958" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.369930 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rmxpm"] Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.375375 4793 scope.go:117] "RemoveContainer" containerID="c998dd75c98b69795ee5fb5dadc61e23eb1a709df4bc6c6589c9107c4d8626e1" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.375418 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rmxpm"] Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.407476 4793 scope.go:117] "RemoveContainer" containerID="a1da675f56afc66f10254ae8acb8acfbb6033d5b14e0b5831b35b1c1d907371f" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.412321 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7s8k"] Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.419608 4793 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7s8k"] Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.430436 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-78ml2"] Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.436962 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-78ml2"] Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.439807 4793 scope.go:117] "RemoveContainer" containerID="259312878a124238b69d304541bda4609b01b2ae8eda7e0b801b983fe6b8048d" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.448013 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rzprv"] Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.450316 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rzprv"] Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.455648 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s2bff"] Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.456006 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s2bff"] Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.460562 4793 scope.go:117] "RemoveContainer" containerID="43bfb5939bd86706aa081141535e22be807fc9c0577dcd35f30a1721eb3de604" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.484300 4793 scope.go:117] "RemoveContainer" containerID="bd6863c8a9c27707c6dccdad20d47ee96c33964b5f559bcf79975f2bd5c9666c" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.501237 4793 scope.go:117] "RemoveContainer" containerID="a1c6048ea8865fb8c9c51c5203790e66b0db46dbc02e36a248d91ae4039fe139" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.518485 4793 scope.go:117] "RemoveContainer" 
containerID="53ade5b98f6c11809a6f4eec9f04e9ed3dc37b0586623e2c12f5e703dca23ac1" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.530045 4793 scope.go:117] "RemoveContainer" containerID="0261db876c5107325c1d28c55b554ed6cd68dfefffa0d5f854c959425c4e8325" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.541285 4793 scope.go:117] "RemoveContainer" containerID="b983778d2b376f12a054d385d4e78fd86cd0c3fd1e88a5f4df510c38cf861052" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.561770 4793 scope.go:117] "RemoveContainer" containerID="9deb6cc715d13e34300108f710dc2b32d0af0395f968c159a45df8f3c1fa5d0c" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.574504 4793 scope.go:117] "RemoveContainer" containerID="df91a57438b106203c49f3ea641745b85b7989ad66fa5fea056e5bed77508967" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.767488 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ea0b73a-1820-4411-bbbf-acd3d22899e0" path="/var/lib/kubelet/pods/1ea0b73a-1820-4411-bbbf-acd3d22899e0/volumes" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.768404 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ece571b-df1f-4605-8127-b71fb41d2189" path="/var/lib/kubelet/pods/2ece571b-df1f-4605-8127-b71fb41d2189/volumes" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.769241 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" path="/var/lib/kubelet/pods/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125/volumes" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.770806 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" path="/var/lib/kubelet/pods/b0a722d2-056a-4bf2-a33c-719ee8aba7a8/volumes" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.771825 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7779d32-d1d6-4e24-b59e-04461b1021c3" 
path="/var/lib/kubelet/pods/f7779d32-d1d6-4e24-b59e-04461b1021c3/volumes" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.890931 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dbctj"] Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.892668 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" containerName="registry-server" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.892694 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" containerName="registry-server" Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.892734 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" containerName="registry-server" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.892742 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" containerName="registry-server" Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.892751 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ece571b-df1f-4605-8127-b71fb41d2189" containerName="extract-content" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.892762 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ece571b-df1f-4605-8127-b71fb41d2189" containerName="extract-content" Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.892775 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea0b73a-1820-4411-bbbf-acd3d22899e0" containerName="extract-content" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.892811 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea0b73a-1820-4411-bbbf-acd3d22899e0" containerName="extract-content" Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.892826 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1ea0b73a-1820-4411-bbbf-acd3d22899e0" containerName="extract-utilities" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.892836 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea0b73a-1820-4411-bbbf-acd3d22899e0" containerName="extract-utilities" Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.892847 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" containerName="extract-content" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.892856 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" containerName="extract-content" Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.892890 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" containerName="extract-utilities" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.892899 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" containerName="extract-utilities" Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.892911 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7779d32-d1d6-4e24-b59e-04461b1021c3" containerName="marketplace-operator" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.892990 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7779d32-d1d6-4e24-b59e-04461b1021c3" containerName="marketplace-operator" Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.893059 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" containerName="extract-utilities" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.893099 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" containerName="extract-utilities" Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.893142 4793 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="2ece571b-df1f-4605-8127-b71fb41d2189" containerName="extract-utilities" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.893152 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ece571b-df1f-4605-8127-b71fb41d2189" containerName="extract-utilities" Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.893163 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea0b73a-1820-4411-bbbf-acd3d22899e0" containerName="registry-server" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.893173 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea0b73a-1820-4411-bbbf-acd3d22899e0" containerName="registry-server" Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.893729 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ece571b-df1f-4605-8127-b71fb41d2189" containerName="registry-server" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.893743 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ece571b-df1f-4605-8127-b71fb41d2189" containerName="registry-server" Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.893882 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" containerName="extract-content" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.893895 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" containerName="extract-content" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.894841 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7779d32-d1d6-4e24-b59e-04461b1021c3" containerName="marketplace-operator" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.895245 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" containerName="registry-server" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.895258 4793 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="2ece571b-df1f-4605-8127-b71fb41d2189" containerName="registry-server" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.895272 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea0b73a-1820-4411-bbbf-acd3d22899e0" containerName="registry-server" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.895321 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" containerName="registry-server" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.900263 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dbctj"] Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.900819 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.917009 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.069125 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26c54\" (UniqueName: \"kubernetes.io/projected/84e4ec14-74a8-48d6-a90c-7cf20d345d74-kube-api-access-26c54\") pod \"redhat-marketplace-dbctj\" (UID: \"84e4ec14-74a8-48d6-a90c-7cf20d345d74\") " pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.069221 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e4ec14-74a8-48d6-a90c-7cf20d345d74-utilities\") pod \"redhat-marketplace-dbctj\" (UID: \"84e4ec14-74a8-48d6-a90c-7cf20d345d74\") " pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.069251 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e4ec14-74a8-48d6-a90c-7cf20d345d74-catalog-content\") pod \"redhat-marketplace-dbctj\" (UID: \"84e4ec14-74a8-48d6-a90c-7cf20d345d74\") " pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.078438 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hr5wt"] Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.079711 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.084035 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.091055 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hr5wt"] Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.170493 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e4ec14-74a8-48d6-a90c-7cf20d345d74-utilities\") pod \"redhat-marketplace-dbctj\" (UID: \"84e4ec14-74a8-48d6-a90c-7cf20d345d74\") " pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.170553 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e4ec14-74a8-48d6-a90c-7cf20d345d74-catalog-content\") pod \"redhat-marketplace-dbctj\" (UID: \"84e4ec14-74a8-48d6-a90c-7cf20d345d74\") " pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.170605 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26c54\" (UniqueName: 
\"kubernetes.io/projected/84e4ec14-74a8-48d6-a90c-7cf20d345d74-kube-api-access-26c54\") pod \"redhat-marketplace-dbctj\" (UID: \"84e4ec14-74a8-48d6-a90c-7cf20d345d74\") " pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.171714 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e4ec14-74a8-48d6-a90c-7cf20d345d74-utilities\") pod \"redhat-marketplace-dbctj\" (UID: \"84e4ec14-74a8-48d6-a90c-7cf20d345d74\") " pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.171713 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e4ec14-74a8-48d6-a90c-7cf20d345d74-catalog-content\") pod \"redhat-marketplace-dbctj\" (UID: \"84e4ec14-74a8-48d6-a90c-7cf20d345d74\") " pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.189971 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26c54\" (UniqueName: \"kubernetes.io/projected/84e4ec14-74a8-48d6-a90c-7cf20d345d74-kube-api-access-26c54\") pod \"redhat-marketplace-dbctj\" (UID: \"84e4ec14-74a8-48d6-a90c-7cf20d345d74\") " pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.226503 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.271973 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9704070a-bf0b-4232-8e47-933e5f2ed889-catalog-content\") pod \"redhat-operators-hr5wt\" (UID: \"9704070a-bf0b-4232-8e47-933e5f2ed889\") " pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.272029 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9704070a-bf0b-4232-8e47-933e5f2ed889-utilities\") pod \"redhat-operators-hr5wt\" (UID: \"9704070a-bf0b-4232-8e47-933e5f2ed889\") " pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.272076 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r7gf\" (UniqueName: \"kubernetes.io/projected/9704070a-bf0b-4232-8e47-933e5f2ed889-kube-api-access-6r7gf\") pod \"redhat-operators-hr5wt\" (UID: \"9704070a-bf0b-4232-8e47-933e5f2ed889\") " pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.374333 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9704070a-bf0b-4232-8e47-933e5f2ed889-utilities\") pod \"redhat-operators-hr5wt\" (UID: \"9704070a-bf0b-4232-8e47-933e5f2ed889\") " pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.374452 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r7gf\" (UniqueName: \"kubernetes.io/projected/9704070a-bf0b-4232-8e47-933e5f2ed889-kube-api-access-6r7gf\") pod \"redhat-operators-hr5wt\" (UID: 
\"9704070a-bf0b-4232-8e47-933e5f2ed889\") " pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.374526 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9704070a-bf0b-4232-8e47-933e5f2ed889-catalog-content\") pod \"redhat-operators-hr5wt\" (UID: \"9704070a-bf0b-4232-8e47-933e5f2ed889\") " pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.375095 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9704070a-bf0b-4232-8e47-933e5f2ed889-utilities\") pod \"redhat-operators-hr5wt\" (UID: \"9704070a-bf0b-4232-8e47-933e5f2ed889\") " pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.375114 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9704070a-bf0b-4232-8e47-933e5f2ed889-catalog-content\") pod \"redhat-operators-hr5wt\" (UID: \"9704070a-bf0b-4232-8e47-933e5f2ed889\") " pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.396626 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r7gf\" (UniqueName: \"kubernetes.io/projected/9704070a-bf0b-4232-8e47-933e5f2ed889-kube-api-access-6r7gf\") pod \"redhat-operators-hr5wt\" (UID: \"9704070a-bf0b-4232-8e47-933e5f2ed889\") " pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.418942 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hr5wt"
Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.451118 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dbctj"]
Jan 26 22:46:24 crc kubenswrapper[4793]: W0126 22:46:24.457705 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84e4ec14_74a8_48d6_a90c_7cf20d345d74.slice/crio-a50bcab4b3450e4b8e9bbfeb90c0a6a9402c8739579aef6301dc3e99f6ceb52f WatchSource:0}: Error finding container a50bcab4b3450e4b8e9bbfeb90c0a6a9402c8739579aef6301dc3e99f6ceb52f: Status 404 returned error can't find the container with id a50bcab4b3450e4b8e9bbfeb90c0a6a9402c8739579aef6301dc3e99f6ceb52f
Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.626409 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hr5wt"]
Jan 26 22:46:24 crc kubenswrapper[4793]: W0126 22:46:24.632490 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9704070a_bf0b_4232_8e47_933e5f2ed889.slice/crio-498b4cd0435c7f3c7e8283bcc82f6992749921d6587f09f4cfa9c1f2643c637e WatchSource:0}: Error finding container 498b4cd0435c7f3c7e8283bcc82f6992749921d6587f09f4cfa9c1f2643c637e: Status 404 returned error can't find the container with id 498b4cd0435c7f3c7e8283bcc82f6992749921d6587f09f4cfa9c1f2643c637e
Jan 26 22:46:25 crc kubenswrapper[4793]: I0126 22:46:25.353045 4793 generic.go:334] "Generic (PLEG): container finished" podID="9704070a-bf0b-4232-8e47-933e5f2ed889" containerID="aebb2d02c3863da955a1ff40c1abe3cd4babf3e77ec1747eaaccfab32e015400" exitCode=0
Jan 26 22:46:25 crc kubenswrapper[4793]: I0126 22:46:25.353163 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hr5wt" event={"ID":"9704070a-bf0b-4232-8e47-933e5f2ed889","Type":"ContainerDied","Data":"aebb2d02c3863da955a1ff40c1abe3cd4babf3e77ec1747eaaccfab32e015400"}
Jan 26 22:46:25 crc kubenswrapper[4793]: I0126 22:46:25.353272 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hr5wt" event={"ID":"9704070a-bf0b-4232-8e47-933e5f2ed889","Type":"ContainerStarted","Data":"498b4cd0435c7f3c7e8283bcc82f6992749921d6587f09f4cfa9c1f2643c637e"}
Jan 26 22:46:25 crc kubenswrapper[4793]: I0126 22:46:25.355170 4793 generic.go:334] "Generic (PLEG): container finished" podID="84e4ec14-74a8-48d6-a90c-7cf20d345d74" containerID="a1bc8a72fd3d03823afbc16877e4c2346324918eeaa8a4b9bd9dbc3951bec425" exitCode=0
Jan 26 22:46:25 crc kubenswrapper[4793]: I0126 22:46:25.355369 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbctj" event={"ID":"84e4ec14-74a8-48d6-a90c-7cf20d345d74","Type":"ContainerDied","Data":"a1bc8a72fd3d03823afbc16877e4c2346324918eeaa8a4b9bd9dbc3951bec425"}
Jan 26 22:46:25 crc kubenswrapper[4793]: I0126 22:46:25.355850 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbctj" event={"ID":"84e4ec14-74a8-48d6-a90c-7cf20d345d74","Type":"ContainerStarted","Data":"a50bcab4b3450e4b8e9bbfeb90c0a6a9402c8739579aef6301dc3e99f6ceb52f"}
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.284784 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6mvm2"]
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.285918 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6mvm2"
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.288502 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.295284 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6mvm2"]
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.416203 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1676c9d7-42b7-4566-802d-8a3caeef1380-utilities\") pod \"community-operators-6mvm2\" (UID: \"1676c9d7-42b7-4566-802d-8a3caeef1380\") " pod="openshift-marketplace/community-operators-6mvm2"
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.416330 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk4cv\" (UniqueName: \"kubernetes.io/projected/1676c9d7-42b7-4566-802d-8a3caeef1380-kube-api-access-wk4cv\") pod \"community-operators-6mvm2\" (UID: \"1676c9d7-42b7-4566-802d-8a3caeef1380\") " pod="openshift-marketplace/community-operators-6mvm2"
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.417797 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1676c9d7-42b7-4566-802d-8a3caeef1380-catalog-content\") pod \"community-operators-6mvm2\" (UID: \"1676c9d7-42b7-4566-802d-8a3caeef1380\") " pod="openshift-marketplace/community-operators-6mvm2"
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.489647 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-snkfp"]
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.493298 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-snkfp"
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.495737 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.503573 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-snkfp"]
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.519137 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk4cv\" (UniqueName: \"kubernetes.io/projected/1676c9d7-42b7-4566-802d-8a3caeef1380-kube-api-access-wk4cv\") pod \"community-operators-6mvm2\" (UID: \"1676c9d7-42b7-4566-802d-8a3caeef1380\") " pod="openshift-marketplace/community-operators-6mvm2"
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.519224 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1676c9d7-42b7-4566-802d-8a3caeef1380-catalog-content\") pod \"community-operators-6mvm2\" (UID: \"1676c9d7-42b7-4566-802d-8a3caeef1380\") " pod="openshift-marketplace/community-operators-6mvm2"
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.519713 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1676c9d7-42b7-4566-802d-8a3caeef1380-utilities\") pod \"community-operators-6mvm2\" (UID: \"1676c9d7-42b7-4566-802d-8a3caeef1380\") " pod="openshift-marketplace/community-operators-6mvm2"
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.520163 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1676c9d7-42b7-4566-802d-8a3caeef1380-utilities\") pod \"community-operators-6mvm2\" (UID: \"1676c9d7-42b7-4566-802d-8a3caeef1380\") " pod="openshift-marketplace/community-operators-6mvm2"
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.520559 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1676c9d7-42b7-4566-802d-8a3caeef1380-catalog-content\") pod \"community-operators-6mvm2\" (UID: \"1676c9d7-42b7-4566-802d-8a3caeef1380\") " pod="openshift-marketplace/community-operators-6mvm2"
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.545151 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk4cv\" (UniqueName: \"kubernetes.io/projected/1676c9d7-42b7-4566-802d-8a3caeef1380-kube-api-access-wk4cv\") pod \"community-operators-6mvm2\" (UID: \"1676c9d7-42b7-4566-802d-8a3caeef1380\") " pod="openshift-marketplace/community-operators-6mvm2"
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.611551 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6mvm2"
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.621123 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af624fd1-30e1-4ee0-8253-89318c76eb99-catalog-content\") pod \"certified-operators-snkfp\" (UID: \"af624fd1-30e1-4ee0-8253-89318c76eb99\") " pod="openshift-marketplace/certified-operators-snkfp"
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.621309 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af624fd1-30e1-4ee0-8253-89318c76eb99-utilities\") pod \"certified-operators-snkfp\" (UID: \"af624fd1-30e1-4ee0-8253-89318c76eb99\") " pod="openshift-marketplace/certified-operators-snkfp"
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.621446 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m4vn\" (UniqueName: \"kubernetes.io/projected/af624fd1-30e1-4ee0-8253-89318c76eb99-kube-api-access-9m4vn\") pod \"certified-operators-snkfp\" (UID: \"af624fd1-30e1-4ee0-8253-89318c76eb99\") " pod="openshift-marketplace/certified-operators-snkfp"
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.722340 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af624fd1-30e1-4ee0-8253-89318c76eb99-catalog-content\") pod \"certified-operators-snkfp\" (UID: \"af624fd1-30e1-4ee0-8253-89318c76eb99\") " pod="openshift-marketplace/certified-operators-snkfp"
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.722801 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af624fd1-30e1-4ee0-8253-89318c76eb99-utilities\") pod \"certified-operators-snkfp\" (UID: \"af624fd1-30e1-4ee0-8253-89318c76eb99\") " pod="openshift-marketplace/certified-operators-snkfp"
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.722845 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m4vn\" (UniqueName: \"kubernetes.io/projected/af624fd1-30e1-4ee0-8253-89318c76eb99-kube-api-access-9m4vn\") pod \"certified-operators-snkfp\" (UID: \"af624fd1-30e1-4ee0-8253-89318c76eb99\") " pod="openshift-marketplace/certified-operators-snkfp"
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.723486 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af624fd1-30e1-4ee0-8253-89318c76eb99-catalog-content\") pod \"certified-operators-snkfp\" (UID: \"af624fd1-30e1-4ee0-8253-89318c76eb99\") " pod="openshift-marketplace/certified-operators-snkfp"
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.724160 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af624fd1-30e1-4ee0-8253-89318c76eb99-utilities\") pod \"certified-operators-snkfp\" (UID: \"af624fd1-30e1-4ee0-8253-89318c76eb99\") " pod="openshift-marketplace/certified-operators-snkfp"
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.752156 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m4vn\" (UniqueName: \"kubernetes.io/projected/af624fd1-30e1-4ee0-8253-89318c76eb99-kube-api-access-9m4vn\") pod \"certified-operators-snkfp\" (UID: \"af624fd1-30e1-4ee0-8253-89318c76eb99\") " pod="openshift-marketplace/certified-operators-snkfp"
Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.820850 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-snkfp"
Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.022376 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-snkfp"]
Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.028013 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6mvm2"]
Jan 26 22:46:27 crc kubenswrapper[4793]: W0126 22:46:27.030344 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1676c9d7_42b7_4566_802d_8a3caeef1380.slice/crio-b4694702b0becefe080af1eabb254da552bcaef2c2598d927507aaed8854c459 WatchSource:0}: Error finding container b4694702b0becefe080af1eabb254da552bcaef2c2598d927507aaed8854c459: Status 404 returned error can't find the container with id b4694702b0becefe080af1eabb254da552bcaef2c2598d927507aaed8854c459
Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.373269 4793 generic.go:334] "Generic (PLEG): container finished" podID="1676c9d7-42b7-4566-802d-8a3caeef1380" containerID="758c0b434497349273b97b094196de1f2780899ddba9c8a02322bbac7693f7ea" exitCode=0
Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.373480 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mvm2" event={"ID":"1676c9d7-42b7-4566-802d-8a3caeef1380","Type":"ContainerDied","Data":"758c0b434497349273b97b094196de1f2780899ddba9c8a02322bbac7693f7ea"}
Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.373819 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mvm2" event={"ID":"1676c9d7-42b7-4566-802d-8a3caeef1380","Type":"ContainerStarted","Data":"b4694702b0becefe080af1eabb254da552bcaef2c2598d927507aaed8854c459"}
Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.380052 4793 generic.go:334] "Generic (PLEG): container finished" podID="9704070a-bf0b-4232-8e47-933e5f2ed889" containerID="c3fbb5d754d3dccce8ddf5ddc5c725b7921b08aec37862a557213727ee6c0374" exitCode=0
Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.380095 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hr5wt" event={"ID":"9704070a-bf0b-4232-8e47-933e5f2ed889","Type":"ContainerDied","Data":"c3fbb5d754d3dccce8ddf5ddc5c725b7921b08aec37862a557213727ee6c0374"}
Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.382395 4793 generic.go:334] "Generic (PLEG): container finished" podID="af624fd1-30e1-4ee0-8253-89318c76eb99" containerID="6a489804df28de6deeeb5e08a555849320520ffab8da1bb5cc8535d2fede4486" exitCode=0
Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.382490 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snkfp" event={"ID":"af624fd1-30e1-4ee0-8253-89318c76eb99","Type":"ContainerDied","Data":"6a489804df28de6deeeb5e08a555849320520ffab8da1bb5cc8535d2fede4486"}
Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.382522 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snkfp" event={"ID":"af624fd1-30e1-4ee0-8253-89318c76eb99","Type":"ContainerStarted","Data":"0ece0ec0232026ee340f7d742396c61b11fdbac431b6322500a3333dbe6ea1ce"}
Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.386114 4793 generic.go:334] "Generic (PLEG): container finished" podID="84e4ec14-74a8-48d6-a90c-7cf20d345d74" containerID="90a8d45ac7339a2ea2e2d014322638a68c4c742c8c0f8e7218e9114dd99db6f1" exitCode=0
Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.386182 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbctj" event={"ID":"84e4ec14-74a8-48d6-a90c-7cf20d345d74","Type":"ContainerDied","Data":"90a8d45ac7339a2ea2e2d014322638a68c4c742c8c0f8e7218e9114dd99db6f1"}
Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.587980 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-gckjm"
Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.635248 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s8pqg"]
Jan 26 22:46:28 crc kubenswrapper[4793]: I0126 22:46:28.396756 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hr5wt" event={"ID":"9704070a-bf0b-4232-8e47-933e5f2ed889","Type":"ContainerStarted","Data":"ee6617132f505d1e7567cfdf38d75cbef6294312e494f25d43941f90c393a028"}
Jan 26 22:46:28 crc kubenswrapper[4793]: I0126 22:46:28.405007 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbctj" event={"ID":"84e4ec14-74a8-48d6-a90c-7cf20d345d74","Type":"ContainerStarted","Data":"163c2ab0e0191216d8f02d1459fc6fbf86be3836e60e692d215d828539256663"}
Jan 26 22:46:28 crc kubenswrapper[4793]: I0126 22:46:28.410821 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mvm2" event={"ID":"1676c9d7-42b7-4566-802d-8a3caeef1380","Type":"ContainerStarted","Data":"42d8b739a79864a9d0d175be22082b9d531fa264f9867b85eba48530e8d19ee7"}
Jan 26 22:46:28 crc kubenswrapper[4793]: I0126 22:46:28.420466 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hr5wt" podStartSLOduration=1.901678574 podStartE2EDuration="4.420447064s" podCreationTimestamp="2026-01-26 22:46:24 +0000 UTC" firstStartedPulling="2026-01-26 22:46:25.356868183 +0000 UTC m=+400.345639695" lastFinishedPulling="2026-01-26 22:46:27.875636673 +0000 UTC m=+402.864408185" observedRunningTime="2026-01-26 22:46:28.418634352 +0000 UTC m=+403.407405864" watchObservedRunningTime="2026-01-26 22:46:28.420447064 +0000 UTC m=+403.409218586"
Jan 26 22:46:28 crc kubenswrapper[4793]: I0126 22:46:28.453668 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dbctj" podStartSLOduration=2.813344661 podStartE2EDuration="5.453637833s" podCreationTimestamp="2026-01-26 22:46:23 +0000 UTC" firstStartedPulling="2026-01-26 22:46:25.357956174 +0000 UTC m=+400.346727686" lastFinishedPulling="2026-01-26 22:46:27.998249346 +0000 UTC m=+402.987020858" observedRunningTime="2026-01-26 22:46:28.433506572 +0000 UTC m=+403.422278084" watchObservedRunningTime="2026-01-26 22:46:28.453637833 +0000 UTC m=+403.442409345"
Jan 26 22:46:29 crc kubenswrapper[4793]: I0126 22:46:29.419227 4793 generic.go:334] "Generic (PLEG): container finished" podID="af624fd1-30e1-4ee0-8253-89318c76eb99" containerID="94571a91784d48e98084ef42017ea96b14d8bc6aa4045ace890a13e0a86e7ea3" exitCode=0
Jan 26 22:46:29 crc kubenswrapper[4793]: I0126 22:46:29.419315 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snkfp" event={"ID":"af624fd1-30e1-4ee0-8253-89318c76eb99","Type":"ContainerDied","Data":"94571a91784d48e98084ef42017ea96b14d8bc6aa4045ace890a13e0a86e7ea3"}
Jan 26 22:46:29 crc kubenswrapper[4793]: I0126 22:46:29.421467 4793 generic.go:334] "Generic (PLEG): container finished" podID="1676c9d7-42b7-4566-802d-8a3caeef1380" containerID="42d8b739a79864a9d0d175be22082b9d531fa264f9867b85eba48530e8d19ee7" exitCode=0
Jan 26 22:46:29 crc kubenswrapper[4793]: I0126 22:46:29.421617 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mvm2" event={"ID":"1676c9d7-42b7-4566-802d-8a3caeef1380","Type":"ContainerDied","Data":"42d8b739a79864a9d0d175be22082b9d531fa264f9867b85eba48530e8d19ee7"}
Jan 26 22:46:31 crc kubenswrapper[4793]: I0126 22:46:31.436041 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mvm2" event={"ID":"1676c9d7-42b7-4566-802d-8a3caeef1380","Type":"ContainerStarted","Data":"3cbd815e9b3436b771c9bd9537cb9fe0e9a4d751972eca6a8809a227f579795d"}
Jan 26 22:46:31 crc kubenswrapper[4793]: I0126 22:46:31.439001 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snkfp" event={"ID":"af624fd1-30e1-4ee0-8253-89318c76eb99","Type":"ContainerStarted","Data":"c3da0430e2a945b19db7a4844a21ccb289ae8544897872eabf3e52935b6d9741"}
Jan 26 22:46:31 crc kubenswrapper[4793]: I0126 22:46:31.457310 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6mvm2" podStartSLOduration=2.417306504 podStartE2EDuration="5.457289993s" podCreationTimestamp="2026-01-26 22:46:26 +0000 UTC" firstStartedPulling="2026-01-26 22:46:27.375728791 +0000 UTC m=+402.364500303" lastFinishedPulling="2026-01-26 22:46:30.41571228 +0000 UTC m=+405.404483792" observedRunningTime="2026-01-26 22:46:31.453322438 +0000 UTC m=+406.442093950" watchObservedRunningTime="2026-01-26 22:46:31.457289993 +0000 UTC m=+406.446061495"
Jan 26 22:46:31 crc kubenswrapper[4793]: I0126 22:46:31.473210 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-snkfp" podStartSLOduration=2.206127173 podStartE2EDuration="5.473170702s" podCreationTimestamp="2026-01-26 22:46:26 +0000 UTC" firstStartedPulling="2026-01-26 22:46:27.383771284 +0000 UTC m=+402.372542796" lastFinishedPulling="2026-01-26 22:46:30.650814813 +0000 UTC m=+405.639586325" observedRunningTime="2026-01-26 22:46:31.473134341 +0000 UTC m=+406.461905863" watchObservedRunningTime="2026-01-26 22:46:31.473170702 +0000 UTC m=+406.461942214"
Jan 26 22:46:34 crc kubenswrapper[4793]: I0126 22:46:34.226760 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dbctj"
Jan 26 22:46:34 crc kubenswrapper[4793]: I0126 22:46:34.227488 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dbctj"
Jan 26 22:46:34 crc kubenswrapper[4793]: I0126 22:46:34.271994 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dbctj"
Jan 26 22:46:34 crc kubenswrapper[4793]: I0126 22:46:34.420536 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hr5wt"
Jan 26 22:46:34 crc kubenswrapper[4793]: I0126 22:46:34.420613 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hr5wt"
Jan 26 22:46:34 crc kubenswrapper[4793]: I0126 22:46:34.468907 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hr5wt"
Jan 26 22:46:34 crc kubenswrapper[4793]: I0126 22:46:34.513843 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dbctj"
Jan 26 22:46:34 crc kubenswrapper[4793]: I0126 22:46:34.518283 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hr5wt"
Jan 26 22:46:36 crc kubenswrapper[4793]: I0126 22:46:36.612155 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6mvm2"
Jan 26 22:46:36 crc kubenswrapper[4793]: I0126 22:46:36.612645 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6mvm2"
Jan 26 22:46:36 crc kubenswrapper[4793]: I0126 22:46:36.653995 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6mvm2"
Jan 26 22:46:36 crc kubenswrapper[4793]: I0126 22:46:36.821684 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-snkfp"
Jan 26 22:46:36 crc kubenswrapper[4793]: I0126 22:46:36.821901 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-snkfp"
Jan 26 22:46:36 crc kubenswrapper[4793]: I0126 22:46:36.859509 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-snkfp"
Jan 26 22:46:37 crc kubenswrapper[4793]: I0126 22:46:37.524599 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6mvm2"
Jan 26 22:46:37 crc kubenswrapper[4793]: I0126 22:46:37.540878 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-snkfp"
Jan 26 22:46:48 crc kubenswrapper[4793]: I0126 22:46:48.323018 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 22:46:48 crc kubenswrapper[4793]: I0126 22:46:48.325512 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 22:46:48 crc kubenswrapper[4793]: I0126 22:46:48.325791 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5htjl"
Jan 26 22:46:48 crc kubenswrapper[4793]: I0126 22:46:48.327080 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7075b8e2b6392f8f5dd779c1353cdbef01daae1157751de798f0a136ad2034cf"} pod="openshift-machine-config-operator/machine-config-daemon-5htjl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 22:46:48 crc kubenswrapper[4793]: I0126 22:46:48.327440 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" containerID="cri-o://7075b8e2b6392f8f5dd779c1353cdbef01daae1157751de798f0a136ad2034cf" gracePeriod=600
Jan 26 22:46:48 crc kubenswrapper[4793]: I0126 22:46:48.546264 4793 generic.go:334] "Generic (PLEG): container finished" podID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerID="7075b8e2b6392f8f5dd779c1353cdbef01daae1157751de798f0a136ad2034cf" exitCode=0
Jan 26 22:46:48 crc kubenswrapper[4793]: I0126 22:46:48.546338 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerDied","Data":"7075b8e2b6392f8f5dd779c1353cdbef01daae1157751de798f0a136ad2034cf"}
Jan 26 22:46:48 crc kubenswrapper[4793]: I0126 22:46:48.546473 4793 scope.go:117] "RemoveContainer" containerID="f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944"
Jan 26 22:46:49 crc kubenswrapper[4793]: I0126 22:46:49.556431 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerStarted","Data":"46ee2d2305b3200387efdd7f31d20295da8c90108bda1a6825298a5ec8faa6fe"}
Jan 26 22:46:52 crc kubenswrapper[4793]: I0126 22:46:52.673075 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" podUID="ceed8696-7889-4e56-b430-dc4a6d46e1e6" containerName="registry" containerID="cri-o://3d5ea8649cdaffecebca0a6eab6267672b0e1746c4d8f1584f0ae6957690d95a" gracePeriod=30
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.587744 4793 generic.go:334] "Generic (PLEG): container finished" podID="ceed8696-7889-4e56-b430-dc4a6d46e1e6" containerID="3d5ea8649cdaffecebca0a6eab6267672b0e1746c4d8f1584f0ae6957690d95a" exitCode=0
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.587826 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" event={"ID":"ceed8696-7889-4e56-b430-dc4a6d46e1e6","Type":"ContainerDied","Data":"3d5ea8649cdaffecebca0a6eab6267672b0e1746c4d8f1584f0ae6957690d95a"}
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.704095 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.764325 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") "
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.764796 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-bound-sa-token\") pod \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") "
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.764912 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ceed8696-7889-4e56-b430-dc4a6d46e1e6-ca-trust-extracted\") pod \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") "
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.764979 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ceed8696-7889-4e56-b430-dc4a6d46e1e6-installation-pull-secrets\") pod \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") "
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.765046 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ceed8696-7889-4e56-b430-dc4a6d46e1e6-trusted-ca\") pod \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") "
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.765107 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-registry-tls\") pod \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") "
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.765491 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldcbw\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-kube-api-access-ldcbw\") pod \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") "
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.765765 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ceed8696-7889-4e56-b430-dc4a6d46e1e6-registry-certificates\") pod \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") "
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.772872 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceed8696-7889-4e56-b430-dc4a6d46e1e6-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ceed8696-7889-4e56-b430-dc4a6d46e1e6" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.774360 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceed8696-7889-4e56-b430-dc4a6d46e1e6-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ceed8696-7889-4e56-b430-dc4a6d46e1e6" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.776415 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceed8696-7889-4e56-b430-dc4a6d46e1e6-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ceed8696-7889-4e56-b430-dc4a6d46e1e6" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.781963 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ceed8696-7889-4e56-b430-dc4a6d46e1e6" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.786055 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-kube-api-access-ldcbw" (OuterVolumeSpecName: "kube-api-access-ldcbw") pod "ceed8696-7889-4e56-b430-dc4a6d46e1e6" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6"). InnerVolumeSpecName "kube-api-access-ldcbw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.786305 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ceed8696-7889-4e56-b430-dc4a6d46e1e6" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.786801 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ceed8696-7889-4e56-b430-dc4a6d46e1e6" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.803783 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ceed8696-7889-4e56-b430-dc4a6d46e1e6-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ceed8696-7889-4e56-b430-dc4a6d46e1e6" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.871788 4793 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ceed8696-7889-4e56-b430-dc4a6d46e1e6-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.871836 4793 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ceed8696-7889-4e56-b430-dc4a6d46e1e6-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.871850 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ceed8696-7889-4e56-b430-dc4a6d46e1e6-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.871858 4793 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.871867 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldcbw\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-kube-api-access-ldcbw\") on node \"crc\" DevicePath \"\""
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.871877 4793 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ceed8696-7889-4e56-b430-dc4a6d46e1e6-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.871923 4793 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 26 22:46:54 crc kubenswrapper[4793]: I0126 22:46:54.598379 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" event={"ID":"ceed8696-7889-4e56-b430-dc4a6d46e1e6","Type":"ContainerDied","Data":"4db1e2f2c7947383605b71ec06e390f57d260fbd62a9488814232435a291d5db"}
Jan 26 22:46:54 crc kubenswrapper[4793]: I0126 22:46:54.598466 4793 scope.go:117] "RemoveContainer" containerID="3d5ea8649cdaffecebca0a6eab6267672b0e1746c4d8f1584f0ae6957690d95a"
Jan 26 22:46:54 crc kubenswrapper[4793]: I0126 22:46:54.598562 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:46:54 crc kubenswrapper[4793]: I0126 22:46:54.641014 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s8pqg"]
Jan 26 22:46:54 crc kubenswrapper[4793]: I0126 22:46:54.645372 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s8pqg"]
Jan 26 22:46:55 crc kubenswrapper[4793]: I0126 22:46:55.773578 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceed8696-7889-4e56-b430-dc4a6d46e1e6" path="/var/lib/kubelet/pods/ceed8696-7889-4e56-b430-dc4a6d46e1e6/volumes"
Jan 26 22:49:18 crc kubenswrapper[4793]: I0126 22:49:18.322513 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 22:49:18 crc kubenswrapper[4793]: I0126 22:49:18.323358 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 22:49:46 crc kubenswrapper[4793]: I0126 22:49:46.053237 4793 scope.go:117] "RemoveContainer" containerID="4e9a91212ef600945af935beead74da745fac2e85779d11a2bbe732917961c99"
Jan 26 22:49:48 crc kubenswrapper[4793]: I0126 22:49:48.323025 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 22:49:48
crc kubenswrapper[4793]: I0126 22:49:48.323151 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:50:18 crc kubenswrapper[4793]: I0126 22:50:18.322615 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:50:18 crc kubenswrapper[4793]: I0126 22:50:18.323663 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:50:18 crc kubenswrapper[4793]: I0126 22:50:18.323737 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:50:18 crc kubenswrapper[4793]: I0126 22:50:18.324628 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"46ee2d2305b3200387efdd7f31d20295da8c90108bda1a6825298a5ec8faa6fe"} pod="openshift-machine-config-operator/machine-config-daemon-5htjl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 22:50:18 crc kubenswrapper[4793]: I0126 22:50:18.324704 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" 
podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" containerID="cri-o://46ee2d2305b3200387efdd7f31d20295da8c90108bda1a6825298a5ec8faa6fe" gracePeriod=600 Jan 26 22:50:19 crc kubenswrapper[4793]: I0126 22:50:19.110351 4793 generic.go:334] "Generic (PLEG): container finished" podID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerID="46ee2d2305b3200387efdd7f31d20295da8c90108bda1a6825298a5ec8faa6fe" exitCode=0 Jan 26 22:50:19 crc kubenswrapper[4793]: I0126 22:50:19.110643 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerDied","Data":"46ee2d2305b3200387efdd7f31d20295da8c90108bda1a6825298a5ec8faa6fe"} Jan 26 22:50:19 crc kubenswrapper[4793]: I0126 22:50:19.111215 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerStarted","Data":"f579552814843c2cef89c29637b851bbb824f6610b4162a130c2d08c83775a37"} Jan 26 22:50:19 crc kubenswrapper[4793]: I0126 22:50:19.111256 4793 scope.go:117] "RemoveContainer" containerID="7075b8e2b6392f8f5dd779c1353cdbef01daae1157751de798f0a136ad2034cf" Jan 26 22:52:18 crc kubenswrapper[4793]: I0126 22:52:18.322759 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:52:18 crc kubenswrapper[4793]: I0126 22:52:18.323581 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 26 22:52:30 crc kubenswrapper[4793]: I0126 22:52:30.498723 4793 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.067347 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sgsmk"] Jan 26 22:52:41 crc kubenswrapper[4793]: E0126 22:52:41.070795 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceed8696-7889-4e56-b430-dc4a6d46e1e6" containerName="registry" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.070862 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceed8696-7889-4e56-b430-dc4a6d46e1e6" containerName="registry" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.071057 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceed8696-7889-4e56-b430-dc4a6d46e1e6" containerName="registry" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.072483 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.075276 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sgsmk"] Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.136278 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ddad55e-ec81-4cca-9993-f266aceed079-utilities\") pod \"redhat-operators-sgsmk\" (UID: \"8ddad55e-ec81-4cca-9993-f266aceed079\") " pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.136333 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvx26\" (UniqueName: \"kubernetes.io/projected/8ddad55e-ec81-4cca-9993-f266aceed079-kube-api-access-kvx26\") pod \"redhat-operators-sgsmk\" (UID: \"8ddad55e-ec81-4cca-9993-f266aceed079\") " pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.136386 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ddad55e-ec81-4cca-9993-f266aceed079-catalog-content\") pod \"redhat-operators-sgsmk\" (UID: \"8ddad55e-ec81-4cca-9993-f266aceed079\") " pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.237243 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ddad55e-ec81-4cca-9993-f266aceed079-utilities\") pod \"redhat-operators-sgsmk\" (UID: \"8ddad55e-ec81-4cca-9993-f266aceed079\") " pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.237320 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kvx26\" (UniqueName: \"kubernetes.io/projected/8ddad55e-ec81-4cca-9993-f266aceed079-kube-api-access-kvx26\") pod \"redhat-operators-sgsmk\" (UID: \"8ddad55e-ec81-4cca-9993-f266aceed079\") " pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.237404 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ddad55e-ec81-4cca-9993-f266aceed079-catalog-content\") pod \"redhat-operators-sgsmk\" (UID: \"8ddad55e-ec81-4cca-9993-f266aceed079\") " pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.237663 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ddad55e-ec81-4cca-9993-f266aceed079-utilities\") pod \"redhat-operators-sgsmk\" (UID: \"8ddad55e-ec81-4cca-9993-f266aceed079\") " pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.237927 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ddad55e-ec81-4cca-9993-f266aceed079-catalog-content\") pod \"redhat-operators-sgsmk\" (UID: \"8ddad55e-ec81-4cca-9993-f266aceed079\") " pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.274526 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvx26\" (UniqueName: \"kubernetes.io/projected/8ddad55e-ec81-4cca-9993-f266aceed079-kube-api-access-kvx26\") pod \"redhat-operators-sgsmk\" (UID: \"8ddad55e-ec81-4cca-9993-f266aceed079\") " pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.405479 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.657720 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sgsmk"] Jan 26 22:52:42 crc kubenswrapper[4793]: I0126 22:52:42.127747 4793 generic.go:334] "Generic (PLEG): container finished" podID="8ddad55e-ec81-4cca-9993-f266aceed079" containerID="ba6c9c4249f46fb60153f39d05d1cfd401a7f0498300b26f7ee23efbb6f21e46" exitCode=0 Jan 26 22:52:42 crc kubenswrapper[4793]: I0126 22:52:42.127828 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgsmk" event={"ID":"8ddad55e-ec81-4cca-9993-f266aceed079","Type":"ContainerDied","Data":"ba6c9c4249f46fb60153f39d05d1cfd401a7f0498300b26f7ee23efbb6f21e46"} Jan 26 22:52:42 crc kubenswrapper[4793]: I0126 22:52:42.128318 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgsmk" event={"ID":"8ddad55e-ec81-4cca-9993-f266aceed079","Type":"ContainerStarted","Data":"d6f759f878c836ad8b69979ff57c86e197b44603edc5e72c519f0704218c6685"} Jan 26 22:52:42 crc kubenswrapper[4793]: I0126 22:52:42.129999 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 22:52:43 crc kubenswrapper[4793]: I0126 22:52:43.137043 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgsmk" event={"ID":"8ddad55e-ec81-4cca-9993-f266aceed079","Type":"ContainerStarted","Data":"3b10e94033848db5f81cc91876f94f0f54ebcc7eefef0f96fa5720a93f3d6a69"} Jan 26 22:52:44 crc kubenswrapper[4793]: I0126 22:52:44.146540 4793 generic.go:334] "Generic (PLEG): container finished" podID="8ddad55e-ec81-4cca-9993-f266aceed079" containerID="3b10e94033848db5f81cc91876f94f0f54ebcc7eefef0f96fa5720a93f3d6a69" exitCode=0 Jan 26 22:52:44 crc kubenswrapper[4793]: I0126 22:52:44.146609 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-sgsmk" event={"ID":"8ddad55e-ec81-4cca-9993-f266aceed079","Type":"ContainerDied","Data":"3b10e94033848db5f81cc91876f94f0f54ebcc7eefef0f96fa5720a93f3d6a69"} Jan 26 22:52:45 crc kubenswrapper[4793]: I0126 22:52:45.166659 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgsmk" event={"ID":"8ddad55e-ec81-4cca-9993-f266aceed079","Type":"ContainerStarted","Data":"a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694"} Jan 26 22:52:45 crc kubenswrapper[4793]: I0126 22:52:45.195173 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sgsmk" podStartSLOduration=1.7590943989999999 podStartE2EDuration="4.195149122s" podCreationTimestamp="2026-01-26 22:52:41 +0000 UTC" firstStartedPulling="2026-01-26 22:52:42.129728041 +0000 UTC m=+777.118499553" lastFinishedPulling="2026-01-26 22:52:44.565782754 +0000 UTC m=+779.554554276" observedRunningTime="2026-01-26 22:52:45.191612421 +0000 UTC m=+780.180383993" watchObservedRunningTime="2026-01-26 22:52:45.195149122 +0000 UTC m=+780.183920644" Jan 26 22:52:48 crc kubenswrapper[4793]: I0126 22:52:48.322735 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:52:48 crc kubenswrapper[4793]: I0126 22:52:48.323363 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:52:51 crc kubenswrapper[4793]: I0126 22:52:51.405810 4793 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:51 crc kubenswrapper[4793]: I0126 22:52:51.407559 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:52 crc kubenswrapper[4793]: I0126 22:52:52.472156 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sgsmk" podUID="8ddad55e-ec81-4cca-9993-f266aceed079" containerName="registry-server" probeResult="failure" output=< Jan 26 22:52:52 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 26 22:52:52 crc kubenswrapper[4793]: > Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.498088 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8"] Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.499107 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.501699 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.514100 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8"] Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.655964 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgbrs\" (UniqueName: \"kubernetes.io/projected/e776cde1-cc4e-40ce-a7b4-eef264360331-kube-api-access-bgbrs\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8\" (UID: \"e776cde1-cc4e-40ce-a7b4-eef264360331\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.656026 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e776cde1-cc4e-40ce-a7b4-eef264360331-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8\" (UID: \"e776cde1-cc4e-40ce-a7b4-eef264360331\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.656097 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e776cde1-cc4e-40ce-a7b4-eef264360331-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8\" (UID: \"e776cde1-cc4e-40ce-a7b4-eef264360331\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:52:53 crc kubenswrapper[4793]: 
I0126 22:52:53.757576 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e776cde1-cc4e-40ce-a7b4-eef264360331-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8\" (UID: \"e776cde1-cc4e-40ce-a7b4-eef264360331\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.757752 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgbrs\" (UniqueName: \"kubernetes.io/projected/e776cde1-cc4e-40ce-a7b4-eef264360331-kube-api-access-bgbrs\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8\" (UID: \"e776cde1-cc4e-40ce-a7b4-eef264360331\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.758020 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e776cde1-cc4e-40ce-a7b4-eef264360331-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8\" (UID: \"e776cde1-cc4e-40ce-a7b4-eef264360331\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.758417 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e776cde1-cc4e-40ce-a7b4-eef264360331-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8\" (UID: \"e776cde1-cc4e-40ce-a7b4-eef264360331\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.759042 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/e776cde1-cc4e-40ce-a7b4-eef264360331-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8\" (UID: \"e776cde1-cc4e-40ce-a7b4-eef264360331\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.798810 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgbrs\" (UniqueName: \"kubernetes.io/projected/e776cde1-cc4e-40ce-a7b4-eef264360331-kube-api-access-bgbrs\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8\" (UID: \"e776cde1-cc4e-40ce-a7b4-eef264360331\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.817453 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:52:54 crc kubenswrapper[4793]: I0126 22:52:54.131814 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8"] Jan 26 22:52:54 crc kubenswrapper[4793]: I0126 22:52:54.228222 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" event={"ID":"e776cde1-cc4e-40ce-a7b4-eef264360331","Type":"ContainerStarted","Data":"6079632c3586a032fad505fc58c9bdf543c6c0f9d0096be920c7b9066b0c0faf"} Jan 26 22:52:55 crc kubenswrapper[4793]: I0126 22:52:55.268074 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" event={"ID":"e776cde1-cc4e-40ce-a7b4-eef264360331","Type":"ContainerStarted","Data":"525077eadc69cbb034a38a46f77a91f42674d4880436ff53f1c19a084bb46a3e"} Jan 26 22:52:56 crc kubenswrapper[4793]: I0126 22:52:56.278049 4793 
generic.go:334] "Generic (PLEG): container finished" podID="e776cde1-cc4e-40ce-a7b4-eef264360331" containerID="525077eadc69cbb034a38a46f77a91f42674d4880436ff53f1c19a084bb46a3e" exitCode=0 Jan 26 22:52:56 crc kubenswrapper[4793]: I0126 22:52:56.278172 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" event={"ID":"e776cde1-cc4e-40ce-a7b4-eef264360331","Type":"ContainerDied","Data":"525077eadc69cbb034a38a46f77a91f42674d4880436ff53f1c19a084bb46a3e"} Jan 26 22:52:58 crc kubenswrapper[4793]: I0126 22:52:58.294250 4793 generic.go:334] "Generic (PLEG): container finished" podID="e776cde1-cc4e-40ce-a7b4-eef264360331" containerID="0fcce5b41356fa44cb98dfad8e9a0f443746aa0a93940a91a8100514e59a8a51" exitCode=0 Jan 26 22:52:58 crc kubenswrapper[4793]: I0126 22:52:58.294321 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" event={"ID":"e776cde1-cc4e-40ce-a7b4-eef264360331","Type":"ContainerDied","Data":"0fcce5b41356fa44cb98dfad8e9a0f443746aa0a93940a91a8100514e59a8a51"} Jan 26 22:52:59 crc kubenswrapper[4793]: I0126 22:52:59.306321 4793 generic.go:334] "Generic (PLEG): container finished" podID="e776cde1-cc4e-40ce-a7b4-eef264360331" containerID="da0860d1b2c1f834cd5dbb6bd0281a5792b04c2becc3f164bef7044c14b718b5" exitCode=0 Jan 26 22:52:59 crc kubenswrapper[4793]: I0126 22:52:59.306408 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" event={"ID":"e776cde1-cc4e-40ce-a7b4-eef264360331","Type":"ContainerDied","Data":"da0860d1b2c1f834cd5dbb6bd0281a5792b04c2becc3f164bef7044c14b718b5"} Jan 26 22:53:00 crc kubenswrapper[4793]: I0126 22:53:00.626693 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:53:00 crc kubenswrapper[4793]: I0126 22:53:00.692256 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgbrs\" (UniqueName: \"kubernetes.io/projected/e776cde1-cc4e-40ce-a7b4-eef264360331-kube-api-access-bgbrs\") pod \"e776cde1-cc4e-40ce-a7b4-eef264360331\" (UID: \"e776cde1-cc4e-40ce-a7b4-eef264360331\") " Jan 26 22:53:00 crc kubenswrapper[4793]: I0126 22:53:00.692413 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e776cde1-cc4e-40ce-a7b4-eef264360331-bundle\") pod \"e776cde1-cc4e-40ce-a7b4-eef264360331\" (UID: \"e776cde1-cc4e-40ce-a7b4-eef264360331\") " Jan 26 22:53:00 crc kubenswrapper[4793]: I0126 22:53:00.692488 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e776cde1-cc4e-40ce-a7b4-eef264360331-util\") pod \"e776cde1-cc4e-40ce-a7b4-eef264360331\" (UID: \"e776cde1-cc4e-40ce-a7b4-eef264360331\") " Jan 26 22:53:00 crc kubenswrapper[4793]: I0126 22:53:00.693014 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e776cde1-cc4e-40ce-a7b4-eef264360331-bundle" (OuterVolumeSpecName: "bundle") pod "e776cde1-cc4e-40ce-a7b4-eef264360331" (UID: "e776cde1-cc4e-40ce-a7b4-eef264360331"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:53:00 crc kubenswrapper[4793]: I0126 22:53:00.705164 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e776cde1-cc4e-40ce-a7b4-eef264360331-kube-api-access-bgbrs" (OuterVolumeSpecName: "kube-api-access-bgbrs") pod "e776cde1-cc4e-40ce-a7b4-eef264360331" (UID: "e776cde1-cc4e-40ce-a7b4-eef264360331"). InnerVolumeSpecName "kube-api-access-bgbrs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:53:00 crc kubenswrapper[4793]: I0126 22:53:00.718218 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e776cde1-cc4e-40ce-a7b4-eef264360331-util" (OuterVolumeSpecName: "util") pod "e776cde1-cc4e-40ce-a7b4-eef264360331" (UID: "e776cde1-cc4e-40ce-a7b4-eef264360331"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:53:00 crc kubenswrapper[4793]: I0126 22:53:00.793735 4793 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e776cde1-cc4e-40ce-a7b4-eef264360331-util\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:00 crc kubenswrapper[4793]: I0126 22:53:00.793777 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgbrs\" (UniqueName: \"kubernetes.io/projected/e776cde1-cc4e-40ce-a7b4-eef264360331-kube-api-access-bgbrs\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:00 crc kubenswrapper[4793]: I0126 22:53:00.793798 4793 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e776cde1-cc4e-40ce-a7b4-eef264360331-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.323470 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" event={"ID":"e776cde1-cc4e-40ce-a7b4-eef264360331","Type":"ContainerDied","Data":"6079632c3586a032fad505fc58c9bdf543c6c0f9d0096be920c7b9066b0c0faf"} Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.323531 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6079632c3586a032fad505fc58c9bdf543c6c0f9d0096be920c7b9066b0c0faf" Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.323555 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8"
Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.471709 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sgsmk"
Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.543427 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sgsmk"
Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.721207 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pwtbk"]
Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.722118 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovn-controller" containerID="cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f" gracePeriod=30
Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.722550 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="sbdb" containerID="cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b" gracePeriod=30
Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.722594 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="nbdb" containerID="cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01" gracePeriod=30
Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.722635 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="northd" containerID="cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f" gracePeriod=30
Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.722675 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c" gracePeriod=30
Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.722714 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="kube-rbac-proxy-node" containerID="cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff" gracePeriod=30
Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.722755 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovn-acl-logging" containerID="cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130" gracePeriod=30
Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.771944 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller" containerID="cri-o://89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429" gracePeriod=30
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.035803 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/3.log"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.039434 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovn-acl-logging/0.log"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.040284 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovn-controller/0.log"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.041205 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111160 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lj9ws"]
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111236 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovnkube-script-lib\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") "
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111319 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-ovn\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") "
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111353 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-kubelet\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") "
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111375 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-cni-netd\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") "
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111396 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-run-netns\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") "
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111425 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") "
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111448 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-systemd\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") "
Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111453 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovn-controller"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111465 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-openvswitch\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") "
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111470 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovn-controller"
Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111481 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="kube-rbac-proxy-node"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111492 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="kube-rbac-proxy-node"
Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111501 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovn-acl-logging"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111508 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovn-acl-logging"
Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111523 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="nbdb"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111530 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="nbdb"
Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111539 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="sbdb"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111546 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="sbdb"
Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111564 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e776cde1-cc4e-40ce-a7b4-eef264360331" containerName="extract"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111573 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e776cde1-cc4e-40ce-a7b4-eef264360331" containerName="extract"
Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111586 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111594 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller"
Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111601 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="kubecfg-setup"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111608 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="kubecfg-setup"
Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111618 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111624 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller"
Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111634 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111641 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller"
Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111649 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="northd"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111657 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="northd"
Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111666 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e776cde1-cc4e-40ce-a7b4-eef264360331" containerName="util"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111672 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e776cde1-cc4e-40ce-a7b4-eef264360331" containerName="util"
Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111679 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111686 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller"
Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111696 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="kube-rbac-proxy-ovn-metrics"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111703 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="kube-rbac-proxy-ovn-metrics"
Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111709 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e776cde1-cc4e-40ce-a7b4-eef264360331" containerName="pull"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111716 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e776cde1-cc4e-40ce-a7b4-eef264360331" containerName="pull"
Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111727 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111734 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111481 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-systemd-units\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") "
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111846 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovn-controller"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111861 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="nbdb"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111871 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="northd"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111882 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="kube-rbac-proxy-ovn-metrics"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111891 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111900 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="sbdb"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111878 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-log-socket\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") "
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111913 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111922 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111932 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111940 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111949 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovn-acl-logging"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111952 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovn-node-metrics-cert\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") "
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112002 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-run-ovn-kubernetes\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") "
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111957 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="kube-rbac-proxy-node"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112053 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e776cde1-cc4e-40ce-a7b4-eef264360331" containerName="extract"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112096 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-env-overrides\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") "
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112125 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovnkube-config\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") "
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112165 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-cni-bin\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") "
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112219 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-slash\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") "
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112245 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-node-log\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") "
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112307 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-etc-openvswitch\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") "
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112331 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97spd\" (UniqueName: \"kubernetes.io/projected/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-kube-api-access-97spd\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") "
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112392 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-var-lib-openvswitch\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") "
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111527 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111546 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111563 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111577 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111592 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111607 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111903 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111959 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112022 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-log-socket" (OuterVolumeSpecName: "log-socket") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112785 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112808 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.113352 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.113693 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.113719 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.113739 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-slash" (OuterVolumeSpecName: "host-slash") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.113759 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-node-log" (OuterVolumeSpecName: "node-log") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.113801 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.114234 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.116788 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.120466 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-kube-api-access-97spd" (OuterVolumeSpecName: "kube-api-access-97spd") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "kube-api-access-97spd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.136665 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.213944 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-cni-bin\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.214061 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-run-ovn\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.214144 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-systemd-units\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.214185 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/02e40a32-f700-4300-a55e-7ba957dd8ab8-ovnkube-script-lib\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.214324 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/02e40a32-f700-4300-a55e-7ba957dd8ab8-env-overrides\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.214380 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-kubelet\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.214524 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-var-lib-openvswitch\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.214616 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.214659 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-slash\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.214695 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scgdq\" (UniqueName: \"kubernetes.io/projected/02e40a32-f700-4300-a55e-7ba957dd8ab8-kube-api-access-scgdq\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.214780 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-run-ovn-kubernetes\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.214842 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-run-netns\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.214932 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-node-log\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215023 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-run-systemd\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215062 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/02e40a32-f700-4300-a55e-7ba957dd8ab8-ovnkube-config\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215127 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-run-openvswitch\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215175 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-log-socket\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215269 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/02e40a32-f700-4300-a55e-7ba957dd8ab8-ovn-node-metrics-cert\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215313 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-cni-netd\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215353 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-etc-openvswitch\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215455 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215478 4793 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-cni-bin\") on node \"crc\" DevicePath \"\""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215499 4793 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-slash\") on node \"crc\" DevicePath \"\""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215518 4793 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-node-log\") on node \"crc\" DevicePath \"\""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215539 4793 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215557 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97spd\" (UniqueName: \"kubernetes.io/projected/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-kube-api-access-97spd\") on node \"crc\" DevicePath \"\""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215575 4793 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215591 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215603 4793 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215618 4793 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215630 4793 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215642 4793 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215656 4793 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215670 4793 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215682 4793 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215697 4793 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215708 4793 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-log-socket\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215721 4793 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215736 4793 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215751 4793 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.317704 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-run-ovn-kubernetes\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.317828 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-run-netns\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.317934 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-node-log\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.317975 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-run-ovn-kubernetes\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.318010 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-run-systemd\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.318112 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-run-systemd\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.318126 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-run-netns\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.318277 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-node-log\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.320377 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/02e40a32-f700-4300-a55e-7ba957dd8ab8-ovnkube-config\") pod \"ovnkube-node-lj9ws\" (UID: 
\"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.320531 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/02e40a32-f700-4300-a55e-7ba957dd8ab8-ovnkube-config\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.320793 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-run-openvswitch\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.320865 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-log-socket\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.320979 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-run-openvswitch\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321047 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-log-socket\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 
22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321091 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-cni-netd\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321133 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/02e40a32-f700-4300-a55e-7ba957dd8ab8-ovn-node-metrics-cert\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321175 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-etc-openvswitch\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321373 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-cni-bin\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321429 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-run-ovn\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321317 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-etc-openvswitch\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321263 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-cni-netd\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321530 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-cni-bin\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321593 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-systemd-units\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321664 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/02e40a32-f700-4300-a55e-7ba957dd8ab8-ovnkube-script-lib\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321679 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-run-ovn\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321825 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-systemd-units\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321870 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/02e40a32-f700-4300-a55e-7ba957dd8ab8-env-overrides\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321973 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-kubelet\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.322059 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-kubelet\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.322362 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-var-lib-openvswitch\") pod 
\"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.322575 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.322466 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-var-lib-openvswitch\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.322672 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-slash\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.322909 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scgdq\" (UniqueName: \"kubernetes.io/projected/02e40a32-f700-4300-a55e-7ba957dd8ab8-kube-api-access-scgdq\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.322814 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-slash\") pod \"ovnkube-node-lj9ws\" (UID: 
\"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.322769 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.323002 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/02e40a32-f700-4300-a55e-7ba957dd8ab8-env-overrides\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.323078 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/02e40a32-f700-4300-a55e-7ba957dd8ab8-ovnkube-script-lib\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.328607 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/02e40a32-f700-4300-a55e-7ba957dd8ab8-ovn-node-metrics-cert\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.336378 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/3.log" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.346259 4793 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovn-acl-logging/0.log" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347091 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovn-controller/0.log" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347733 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429" exitCode=0 Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347773 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b" exitCode=0 Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347791 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01" exitCode=0 Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347806 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f" exitCode=0 Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347820 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c" exitCode=0 Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347833 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff" exitCode=0 Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347848 4793 generic.go:334] "Generic 
(PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130" exitCode=143 Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347856 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347865 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347964 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347862 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f" exitCode=143 Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347992 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348150 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348178 4793 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348238 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348285 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348316 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348334 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348351 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348367 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348378 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348389 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348401 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348411 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348429 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348449 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348466 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348479 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348497 4793 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348509 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348521 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348531 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348542 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348553 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348564 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348579 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f"} Jan 26 
22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348601 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348613 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348625 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348636 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348650 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348661 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348673 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348685 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130"} Jan 26 
22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348016 4793 scope.go:117] "RemoveContainer" containerID="89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348698 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348811 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348830 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"972d2d9dd7c3ad94fb7b559afde32870a71392b37532ff0117112547580522c5"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348850 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348864 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348879 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348890 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01"} Jan 26 22:53:02 
crc kubenswrapper[4793]: I0126 22:53:02.348900 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348911 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348921 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348932 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348943 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348954 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.353051 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l5qgq_2e6daa0d-7641-46e1-b9ab-8479c1cd00d6/kube-multus/2.log" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.354007 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l5qgq_2e6daa0d-7641-46e1-b9ab-8479c1cd00d6/kube-multus/1.log" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.354080 4793 generic.go:334] "Generic 
(PLEG): container finished" podID="2e6daa0d-7641-46e1-b9ab-8479c1cd00d6" containerID="99049525e6cacaf6c4ab17030e1fd8c38dba224b6ec01ee662a23e82658ca382" exitCode=2 Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.354226 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l5qgq" event={"ID":"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6","Type":"ContainerDied","Data":"99049525e6cacaf6c4ab17030e1fd8c38dba224b6ec01ee662a23e82658ca382"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.354355 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cb61fdad3703c9db3f70a80af86571cbed8b1dc20e073f4ee149431f71f0298"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.355236 4793 scope.go:117] "RemoveContainer" containerID="99049525e6cacaf6c4ab17030e1fd8c38dba224b6ec01ee662a23e82658ca382" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.358304 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sgsmk"] Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.360543 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scgdq\" (UniqueName: \"kubernetes.io/projected/02e40a32-f700-4300-a55e-7ba957dd8ab8-kube-api-access-scgdq\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.397697 4793 scope.go:117] "RemoveContainer" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.415893 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pwtbk"] Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.423422 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pwtbk"] Jan 26 22:53:02 crc 
kubenswrapper[4793]: I0126 22:53:02.430330 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.460428 4793 scope.go:117] "RemoveContainer" containerID="500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.476134 4793 scope.go:117] "RemoveContainer" containerID="aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01" Jan 26 22:53:02 crc kubenswrapper[4793]: W0126 22:53:02.483297 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02e40a32_f700_4300_a55e_7ba957dd8ab8.slice/crio-2b3be4ff435998f584f6234f4c1d375eb23bb5bd614f3154918d53d007f0f455 WatchSource:0}: Error finding container 2b3be4ff435998f584f6234f4c1d375eb23bb5bd614f3154918d53d007f0f455: Status 404 returned error can't find the container with id 2b3be4ff435998f584f6234f4c1d375eb23bb5bd614f3154918d53d007f0f455 Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.494515 4793 scope.go:117] "RemoveContainer" containerID="edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.530710 4793 scope.go:117] "RemoveContainer" containerID="ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.555875 4793 scope.go:117] "RemoveContainer" containerID="e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.586020 4793 scope.go:117] "RemoveContainer" containerID="35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.609590 4793 scope.go:117] "RemoveContainer" containerID="9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f" Jan 26 22:53:02 crc 
kubenswrapper[4793]: I0126 22:53:02.631928 4793 scope.go:117] "RemoveContainer" containerID="7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.649729 4793 scope.go:117] "RemoveContainer" containerID="89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.650253 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429\": container with ID starting with 89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429 not found: ID does not exist" containerID="89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.650334 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429"} err="failed to get container status \"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429\": rpc error: code = NotFound desc = could not find container \"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429\": container with ID starting with 89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.650385 4793 scope.go:117] "RemoveContainer" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.651034 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\": container with ID starting with 7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23 not found: ID does not exist" 
containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.651079 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23"} err="failed to get container status \"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\": rpc error: code = NotFound desc = could not find container \"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\": container with ID starting with 7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.651123 4793 scope.go:117] "RemoveContainer" containerID="500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.651554 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\": container with ID starting with 500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b not found: ID does not exist" containerID="500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.651628 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b"} err="failed to get container status \"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\": rpc error: code = NotFound desc = could not find container \"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\": container with ID starting with 500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.651673 4793 scope.go:117] 
"RemoveContainer" containerID="aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.652479 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\": container with ID starting with aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01 not found: ID does not exist" containerID="aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.652525 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01"} err="failed to get container status \"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\": rpc error: code = NotFound desc = could not find container \"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\": container with ID starting with aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.652564 4793 scope.go:117] "RemoveContainer" containerID="edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.652957 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\": container with ID starting with edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f not found: ID does not exist" containerID="edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.653042 4793 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f"} err="failed to get container status \"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\": rpc error: code = NotFound desc = could not find container \"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\": container with ID starting with edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.653090 4793 scope.go:117] "RemoveContainer" containerID="ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.653478 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\": container with ID starting with ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c not found: ID does not exist" containerID="ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.653520 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c"} err="failed to get container status \"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\": rpc error: code = NotFound desc = could not find container \"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\": container with ID starting with ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.653550 4793 scope.go:117] "RemoveContainer" containerID="e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.653852 4793 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\": container with ID starting with e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff not found: ID does not exist" containerID="e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.653895 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff"} err="failed to get container status \"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\": rpc error: code = NotFound desc = could not find container \"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\": container with ID starting with e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.653918 4793 scope.go:117] "RemoveContainer" containerID="35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.654182 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\": container with ID starting with 35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130 not found: ID does not exist" containerID="35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.654235 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130"} err="failed to get container status \"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\": rpc error: code = NotFound desc = could not find container 
\"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\": container with ID starting with 35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.654257 4793 scope.go:117] "RemoveContainer" containerID="9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.654611 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\": container with ID starting with 9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f not found: ID does not exist" containerID="9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.654653 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f"} err="failed to get container status \"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\": rpc error: code = NotFound desc = could not find container \"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\": container with ID starting with 9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.654675 4793 scope.go:117] "RemoveContainer" containerID="7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.654982 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\": container with ID starting with 7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef not found: ID does not exist" 
containerID="7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.655020 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef"} err="failed to get container status \"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\": rpc error: code = NotFound desc = could not find container \"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\": container with ID starting with 7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.655051 4793 scope.go:117] "RemoveContainer" containerID="89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.655358 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429"} err="failed to get container status \"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429\": rpc error: code = NotFound desc = could not find container \"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429\": container with ID starting with 89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.655388 4793 scope.go:117] "RemoveContainer" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.655699 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23"} err="failed to get container status \"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\": rpc error: code = NotFound desc = could 
not find container \"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\": container with ID starting with 7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.655798 4793 scope.go:117] "RemoveContainer" containerID="500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.656116 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b"} err="failed to get container status \"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\": rpc error: code = NotFound desc = could not find container \"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\": container with ID starting with 500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.656171 4793 scope.go:117] "RemoveContainer" containerID="aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.656519 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01"} err="failed to get container status \"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\": rpc error: code = NotFound desc = could not find container \"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\": container with ID starting with aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.656553 4793 scope.go:117] "RemoveContainer" containerID="edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 
22:53:02.656822 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f"} err="failed to get container status \"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\": rpc error: code = NotFound desc = could not find container \"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\": container with ID starting with edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.656857 4793 scope.go:117] "RemoveContainer" containerID="ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.657099 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c"} err="failed to get container status \"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\": rpc error: code = NotFound desc = could not find container \"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\": container with ID starting with ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.657128 4793 scope.go:117] "RemoveContainer" containerID="e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.657681 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff"} err="failed to get container status \"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\": rpc error: code = NotFound desc = could not find container \"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\": container with ID starting with 
e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.657732 4793 scope.go:117] "RemoveContainer" containerID="35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.658290 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130"} err="failed to get container status \"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\": rpc error: code = NotFound desc = could not find container \"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\": container with ID starting with 35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.658341 4793 scope.go:117] "RemoveContainer" containerID="9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.658664 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f"} err="failed to get container status \"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\": rpc error: code = NotFound desc = could not find container \"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\": container with ID starting with 9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.658706 4793 scope.go:117] "RemoveContainer" containerID="7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.658993 4793 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef"} err="failed to get container status \"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\": rpc error: code = NotFound desc = could not find container \"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\": container with ID starting with 7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.659028 4793 scope.go:117] "RemoveContainer" containerID="89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.659314 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429"} err="failed to get container status \"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429\": rpc error: code = NotFound desc = could not find container \"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429\": container with ID starting with 89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.659351 4793 scope.go:117] "RemoveContainer" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.659573 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23"} err="failed to get container status \"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\": rpc error: code = NotFound desc = could not find container \"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\": container with ID starting with 7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23 not found: ID does not 
exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.659608 4793 scope.go:117] "RemoveContainer" containerID="500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.659842 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b"} err="failed to get container status \"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\": rpc error: code = NotFound desc = could not find container \"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\": container with ID starting with 500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.659871 4793 scope.go:117] "RemoveContainer" containerID="aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.660155 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01"} err="failed to get container status \"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\": rpc error: code = NotFound desc = could not find container \"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\": container with ID starting with aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.660215 4793 scope.go:117] "RemoveContainer" containerID="edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.660537 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f"} err="failed to get container status 
\"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\": rpc error: code = NotFound desc = could not find container \"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\": container with ID starting with edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.660577 4793 scope.go:117] "RemoveContainer" containerID="ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.660877 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c"} err="failed to get container status \"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\": rpc error: code = NotFound desc = could not find container \"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\": container with ID starting with ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.660912 4793 scope.go:117] "RemoveContainer" containerID="e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.661168 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff"} err="failed to get container status \"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\": rpc error: code = NotFound desc = could not find container \"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\": container with ID starting with e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.661283 4793 scope.go:117] "RemoveContainer" 
containerID="35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.661682 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130"} err="failed to get container status \"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\": rpc error: code = NotFound desc = could not find container \"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\": container with ID starting with 35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.661715 4793 scope.go:117] "RemoveContainer" containerID="9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.662014 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f"} err="failed to get container status \"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\": rpc error: code = NotFound desc = could not find container \"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\": container with ID starting with 9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.662046 4793 scope.go:117] "RemoveContainer" containerID="7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.662792 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef"} err="failed to get container status \"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\": rpc error: code = NotFound desc = could 
not find container \"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\": container with ID starting with 7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.662826 4793 scope.go:117] "RemoveContainer" containerID="89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.663491 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429"} err="failed to get container status \"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429\": rpc error: code = NotFound desc = could not find container \"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429\": container with ID starting with 89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.663552 4793 scope.go:117] "RemoveContainer" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.663883 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23"} err="failed to get container status \"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\": rpc error: code = NotFound desc = could not find container \"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\": container with ID starting with 7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.663925 4793 scope.go:117] "RemoveContainer" containerID="500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 
22:53:02.664237 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b"} err="failed to get container status \"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\": rpc error: code = NotFound desc = could not find container \"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\": container with ID starting with 500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.664275 4793 scope.go:117] "RemoveContainer" containerID="aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.664627 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01"} err="failed to get container status \"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\": rpc error: code = NotFound desc = could not find container \"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\": container with ID starting with aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.664667 4793 scope.go:117] "RemoveContainer" containerID="edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.664944 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f"} err="failed to get container status \"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\": rpc error: code = NotFound desc = could not find container \"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\": container with ID starting with 
edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.664983 4793 scope.go:117] "RemoveContainer" containerID="ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.665259 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c"} err="failed to get container status \"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\": rpc error: code = NotFound desc = could not find container \"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\": container with ID starting with ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.665296 4793 scope.go:117] "RemoveContainer" containerID="e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.666365 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff"} err="failed to get container status \"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\": rpc error: code = NotFound desc = could not find container \"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\": container with ID starting with e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.666408 4793 scope.go:117] "RemoveContainer" containerID="35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.666685 4793 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130"} err="failed to get container status \"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\": rpc error: code = NotFound desc = could not find container \"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\": container with ID starting with 35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.666717 4793 scope.go:117] "RemoveContainer" containerID="9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.667003 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f"} err="failed to get container status \"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\": rpc error: code = NotFound desc = could not find container \"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\": container with ID starting with 9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.667040 4793 scope.go:117] "RemoveContainer" containerID="7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.669733 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef"} err="failed to get container status \"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\": rpc error: code = NotFound desc = could not find container \"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\": container with ID starting with 7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef not found: ID does not 
exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.669776 4793 scope.go:117] "RemoveContainer" containerID="89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.670272 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429"} err="failed to get container status \"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429\": rpc error: code = NotFound desc = could not find container \"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429\": container with ID starting with 89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429 not found: ID does not exist" Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.365649 4793 generic.go:334] "Generic (PLEG): container finished" podID="02e40a32-f700-4300-a55e-7ba957dd8ab8" containerID="418f5003965d98f496ddfdedc3c00d6c4e91063468638c05a5c59146de39f241" exitCode=0 Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.365734 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" event={"ID":"02e40a32-f700-4300-a55e-7ba957dd8ab8","Type":"ContainerDied","Data":"418f5003965d98f496ddfdedc3c00d6c4e91063468638c05a5c59146de39f241"} Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.365805 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" event={"ID":"02e40a32-f700-4300-a55e-7ba957dd8ab8","Type":"ContainerStarted","Data":"2b3be4ff435998f584f6234f4c1d375eb23bb5bd614f3154918d53d007f0f455"} Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.371678 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l5qgq_2e6daa0d-7641-46e1-b9ab-8479c1cd00d6/kube-multus/2.log" Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.373220 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-l5qgq_2e6daa0d-7641-46e1-b9ab-8479c1cd00d6/kube-multus/1.log" Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.373510 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sgsmk" podUID="8ddad55e-ec81-4cca-9993-f266aceed079" containerName="registry-server" containerID="cri-o://a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694" gracePeriod=2 Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.373638 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l5qgq" event={"ID":"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6","Type":"ContainerStarted","Data":"3b1c3a3e0eb7cd95b6210611bb06d0453b071c6b568c4172bd4e54bd6a201485"} Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.622005 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.744598 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ddad55e-ec81-4cca-9993-f266aceed079-catalog-content\") pod \"8ddad55e-ec81-4cca-9993-f266aceed079\" (UID: \"8ddad55e-ec81-4cca-9993-f266aceed079\") " Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.744649 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvx26\" (UniqueName: \"kubernetes.io/projected/8ddad55e-ec81-4cca-9993-f266aceed079-kube-api-access-kvx26\") pod \"8ddad55e-ec81-4cca-9993-f266aceed079\" (UID: \"8ddad55e-ec81-4cca-9993-f266aceed079\") " Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.744698 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ddad55e-ec81-4cca-9993-f266aceed079-utilities\") pod \"8ddad55e-ec81-4cca-9993-f266aceed079\" (UID: 
\"8ddad55e-ec81-4cca-9993-f266aceed079\") " Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.747475 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ddad55e-ec81-4cca-9993-f266aceed079-utilities" (OuterVolumeSpecName: "utilities") pod "8ddad55e-ec81-4cca-9993-f266aceed079" (UID: "8ddad55e-ec81-4cca-9993-f266aceed079"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.764217 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ddad55e-ec81-4cca-9993-f266aceed079-kube-api-access-kvx26" (OuterVolumeSpecName: "kube-api-access-kvx26") pod "8ddad55e-ec81-4cca-9993-f266aceed079" (UID: "8ddad55e-ec81-4cca-9993-f266aceed079"). InnerVolumeSpecName "kube-api-access-kvx26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.790748 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" path="/var/lib/kubelet/pods/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/volumes" Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.851638 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ddad55e-ec81-4cca-9993-f266aceed079-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.851682 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvx26\" (UniqueName: \"kubernetes.io/projected/8ddad55e-ec81-4cca-9993-f266aceed079-kube-api-access-kvx26\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.877579 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ddad55e-ec81-4cca-9993-f266aceed079-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"8ddad55e-ec81-4cca-9993-f266aceed079" (UID: "8ddad55e-ec81-4cca-9993-f266aceed079"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.952657 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ddad55e-ec81-4cca-9993-f266aceed079-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.071711 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-zj8dj"] Jan 26 22:53:04 crc kubenswrapper[4793]: E0126 22:53:04.071994 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ddad55e-ec81-4cca-9993-f266aceed079" containerName="extract-utilities" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.072018 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ddad55e-ec81-4cca-9993-f266aceed079" containerName="extract-utilities" Jan 26 22:53:04 crc kubenswrapper[4793]: E0126 22:53:04.072040 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ddad55e-ec81-4cca-9993-f266aceed079" containerName="registry-server" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.072051 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ddad55e-ec81-4cca-9993-f266aceed079" containerName="registry-server" Jan 26 22:53:04 crc kubenswrapper[4793]: E0126 22:53:04.072073 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ddad55e-ec81-4cca-9993-f266aceed079" containerName="extract-content" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.072084 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ddad55e-ec81-4cca-9993-f266aceed079" containerName="extract-content" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.072258 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ddad55e-ec81-4cca-9993-f266aceed079" 
containerName="registry-server" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.072858 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.074932 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.075569 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.077013 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-725pq" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.154360 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jn5l\" (UniqueName: \"kubernetes.io/projected/786f5045-7022-4802-920e-4fcf30bfe3d7-kube-api-access-7jn5l\") pod \"nmstate-operator-646758c888-zj8dj\" (UID: \"786f5045-7022-4802-920e-4fcf30bfe3d7\") " pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.256230 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jn5l\" (UniqueName: \"kubernetes.io/projected/786f5045-7022-4802-920e-4fcf30bfe3d7-kube-api-access-7jn5l\") pod \"nmstate-operator-646758c888-zj8dj\" (UID: \"786f5045-7022-4802-920e-4fcf30bfe3d7\") " pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.283561 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jn5l\" (UniqueName: \"kubernetes.io/projected/786f5045-7022-4802-920e-4fcf30bfe3d7-kube-api-access-7jn5l\") pod \"nmstate-operator-646758c888-zj8dj\" (UID: \"786f5045-7022-4802-920e-4fcf30bfe3d7\") " 
pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.380168 4793 generic.go:334] "Generic (PLEG): container finished" podID="8ddad55e-ec81-4cca-9993-f266aceed079" containerID="a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694" exitCode=0 Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.380268 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgsmk" event={"ID":"8ddad55e-ec81-4cca-9993-f266aceed079","Type":"ContainerDied","Data":"a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694"} Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.380267 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.380351 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgsmk" event={"ID":"8ddad55e-ec81-4cca-9993-f266aceed079","Type":"ContainerDied","Data":"d6f759f878c836ad8b69979ff57c86e197b44603edc5e72c519f0704218c6685"} Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.380387 4793 scope.go:117] "RemoveContainer" containerID="a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.385522 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" event={"ID":"02e40a32-f700-4300-a55e-7ba957dd8ab8","Type":"ContainerStarted","Data":"2653663f85338cb600ecb28ce7f0adc5ab5c74874b261fb2dc3904e4047be81b"} Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.385564 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" event={"ID":"02e40a32-f700-4300-a55e-7ba957dd8ab8","Type":"ContainerStarted","Data":"cdee3dd5ed55562c6bff528f41de27d7572b146238c037b20bde1050515e27e2"} Jan 26 22:53:04 crc 
kubenswrapper[4793]: I0126 22:53:04.385578 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" event={"ID":"02e40a32-f700-4300-a55e-7ba957dd8ab8","Type":"ContainerStarted","Data":"f5099af2b66592de7872934ab3f5f2b78eb2b9bbcce2d4b714d9936d8fd2d32f"} Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.385587 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" event={"ID":"02e40a32-f700-4300-a55e-7ba957dd8ab8","Type":"ContainerStarted","Data":"2bfe2a6dadd071071ee5d9867a018948df9ce831afdd622c5ddbe65ed470eb7d"} Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.385597 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" event={"ID":"02e40a32-f700-4300-a55e-7ba957dd8ab8","Type":"ContainerStarted","Data":"3ab4022be96ec4162e61306cb10921d6989d7e0c652a20bc74151c9a76f9b361"} Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.391665 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.396489 4793 scope.go:117] "RemoveContainer" containerID="3b10e94033848db5f81cc91876f94f0f54ebcc7eefef0f96fa5720a93f3d6a69" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.443032 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sgsmk"] Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.445912 4793 scope.go:117] "RemoveContainer" containerID="ba6c9c4249f46fb60153f39d05d1cfd401a7f0498300b26f7ee23efbb6f21e46" Jan 26 22:53:04 crc kubenswrapper[4793]: E0126 22:53:04.455874 4793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-zj8dj_openshift-nmstate_786f5045-7022-4802-920e-4fcf30bfe3d7_0(3f3c5eb8207f0ee854a6b2d0c97764749995315d7eddd7cd862596e516fb01b2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 22:53:04 crc kubenswrapper[4793]: E0126 22:53:04.456504 4793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-zj8dj_openshift-nmstate_786f5045-7022-4802-920e-4fcf30bfe3d7_0(3f3c5eb8207f0ee854a6b2d0c97764749995315d7eddd7cd862596e516fb01b2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" Jan 26 22:53:04 crc kubenswrapper[4793]: E0126 22:53:04.456534 4793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-zj8dj_openshift-nmstate_786f5045-7022-4802-920e-4fcf30bfe3d7_0(3f3c5eb8207f0ee854a6b2d0c97764749995315d7eddd7cd862596e516fb01b2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" Jan 26 22:53:04 crc kubenswrapper[4793]: E0126 22:53:04.456589 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-operator-646758c888-zj8dj_openshift-nmstate(786f5045-7022-4802-920e-4fcf30bfe3d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-operator-646758c888-zj8dj_openshift-nmstate(786f5045-7022-4802-920e-4fcf30bfe3d7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-zj8dj_openshift-nmstate_786f5045-7022-4802-920e-4fcf30bfe3d7_0(3f3c5eb8207f0ee854a6b2d0c97764749995315d7eddd7cd862596e516fb01b2): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" podUID="786f5045-7022-4802-920e-4fcf30bfe3d7" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.459452 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sgsmk"] Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.477645 4793 scope.go:117] "RemoveContainer" containerID="a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694" Jan 26 22:53:04 crc kubenswrapper[4793]: E0126 22:53:04.478254 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694\": container with ID starting with a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694 not found: ID does not exist" containerID="a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.478295 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694"} err="failed to get container status \"a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694\": rpc error: code = NotFound desc = could not find container \"a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694\": container with ID starting with a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694 not found: ID does not exist" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.478321 4793 scope.go:117] "RemoveContainer" containerID="3b10e94033848db5f81cc91876f94f0f54ebcc7eefef0f96fa5720a93f3d6a69" Jan 26 22:53:04 crc kubenswrapper[4793]: E0126 22:53:04.478598 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b10e94033848db5f81cc91876f94f0f54ebcc7eefef0f96fa5720a93f3d6a69\": container 
with ID starting with 3b10e94033848db5f81cc91876f94f0f54ebcc7eefef0f96fa5720a93f3d6a69 not found: ID does not exist" containerID="3b10e94033848db5f81cc91876f94f0f54ebcc7eefef0f96fa5720a93f3d6a69" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.478618 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b10e94033848db5f81cc91876f94f0f54ebcc7eefef0f96fa5720a93f3d6a69"} err="failed to get container status \"3b10e94033848db5f81cc91876f94f0f54ebcc7eefef0f96fa5720a93f3d6a69\": rpc error: code = NotFound desc = could not find container \"3b10e94033848db5f81cc91876f94f0f54ebcc7eefef0f96fa5720a93f3d6a69\": container with ID starting with 3b10e94033848db5f81cc91876f94f0f54ebcc7eefef0f96fa5720a93f3d6a69 not found: ID does not exist" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.478630 4793 scope.go:117] "RemoveContainer" containerID="ba6c9c4249f46fb60153f39d05d1cfd401a7f0498300b26f7ee23efbb6f21e46" Jan 26 22:53:04 crc kubenswrapper[4793]: E0126 22:53:04.478872 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba6c9c4249f46fb60153f39d05d1cfd401a7f0498300b26f7ee23efbb6f21e46\": container with ID starting with ba6c9c4249f46fb60153f39d05d1cfd401a7f0498300b26f7ee23efbb6f21e46 not found: ID does not exist" containerID="ba6c9c4249f46fb60153f39d05d1cfd401a7f0498300b26f7ee23efbb6f21e46" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.478896 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba6c9c4249f46fb60153f39d05d1cfd401a7f0498300b26f7ee23efbb6f21e46"} err="failed to get container status \"ba6c9c4249f46fb60153f39d05d1cfd401a7f0498300b26f7ee23efbb6f21e46\": rpc error: code = NotFound desc = could not find container \"ba6c9c4249f46fb60153f39d05d1cfd401a7f0498300b26f7ee23efbb6f21e46\": container with ID starting with ba6c9c4249f46fb60153f39d05d1cfd401a7f0498300b26f7ee23efbb6f21e46 not 
found: ID does not exist" Jan 26 22:53:05 crc kubenswrapper[4793]: I0126 22:53:05.410518 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" event={"ID":"02e40a32-f700-4300-a55e-7ba957dd8ab8","Type":"ContainerStarted","Data":"cb80297b2ffb8aff654ac1a4db14467c9e252471cef20a15e27ed7ec85ef9d42"} Jan 26 22:53:05 crc kubenswrapper[4793]: I0126 22:53:05.770381 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ddad55e-ec81-4cca-9993-f266aceed079" path="/var/lib/kubelet/pods/8ddad55e-ec81-4cca-9993-f266aceed079/volumes" Jan 26 22:53:07 crc kubenswrapper[4793]: I0126 22:53:07.435337 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" event={"ID":"02e40a32-f700-4300-a55e-7ba957dd8ab8","Type":"ContainerStarted","Data":"d780717a5d3c137dcda219498b529539a78ed0d6f243c5eb7a94aff4cee75d4b"} Jan 26 22:53:10 crc kubenswrapper[4793]: I0126 22:53:10.281927 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-zj8dj"] Jan 26 22:53:10 crc kubenswrapper[4793]: I0126 22:53:10.282811 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" Jan 26 22:53:10 crc kubenswrapper[4793]: I0126 22:53:10.283275 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" Jan 26 22:53:10 crc kubenswrapper[4793]: E0126 22:53:10.328579 4793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-zj8dj_openshift-nmstate_786f5045-7022-4802-920e-4fcf30bfe3d7_0(4c26155a77e3957ad14840c906c8aece8daaf7b2a8de1cb2bfe2d6371cb523b2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 26 22:53:10 crc kubenswrapper[4793]: E0126 22:53:10.328671 4793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-zj8dj_openshift-nmstate_786f5045-7022-4802-920e-4fcf30bfe3d7_0(4c26155a77e3957ad14840c906c8aece8daaf7b2a8de1cb2bfe2d6371cb523b2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" Jan 26 22:53:10 crc kubenswrapper[4793]: E0126 22:53:10.328695 4793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-zj8dj_openshift-nmstate_786f5045-7022-4802-920e-4fcf30bfe3d7_0(4c26155a77e3957ad14840c906c8aece8daaf7b2a8de1cb2bfe2d6371cb523b2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" Jan 26 22:53:10 crc kubenswrapper[4793]: E0126 22:53:10.328745 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-operator-646758c888-zj8dj_openshift-nmstate(786f5045-7022-4802-920e-4fcf30bfe3d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-operator-646758c888-zj8dj_openshift-nmstate(786f5045-7022-4802-920e-4fcf30bfe3d7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-zj8dj_openshift-nmstate_786f5045-7022-4802-920e-4fcf30bfe3d7_0(4c26155a77e3957ad14840c906c8aece8daaf7b2a8de1cb2bfe2d6371cb523b2): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" podUID="786f5045-7022-4802-920e-4fcf30bfe3d7" Jan 26 22:53:10 crc kubenswrapper[4793]: I0126 22:53:10.459723 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" event={"ID":"02e40a32-f700-4300-a55e-7ba957dd8ab8","Type":"ContainerStarted","Data":"73356bdce4930285c79e4b33cde8bcde24fe506c3b8c0f8d21079bf96ba48536"} Jan 26 22:53:10 crc kubenswrapper[4793]: I0126 22:53:10.460164 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:10 crc kubenswrapper[4793]: I0126 22:53:10.460389 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:10 crc kubenswrapper[4793]: I0126 22:53:10.460447 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:10 crc kubenswrapper[4793]: I0126 22:53:10.497800 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:10 crc kubenswrapper[4793]: I0126 22:53:10.498320 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:10 crc kubenswrapper[4793]: I0126 22:53:10.502498 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" podStartSLOduration=8.502481751 podStartE2EDuration="8.502481751s" podCreationTimestamp="2026-01-26 22:53:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:53:10.499049744 +0000 UTC m=+805.487821296" watchObservedRunningTime="2026-01-26 22:53:10.502481751 +0000 UTC m=+805.491253263" Jan 26 22:53:18 crc kubenswrapper[4793]: 
I0126 22:53:18.323969 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:53:18 crc kubenswrapper[4793]: I0126 22:53:18.325500 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:53:18 crc kubenswrapper[4793]: I0126 22:53:18.325590 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:53:18 crc kubenswrapper[4793]: I0126 22:53:18.326745 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f579552814843c2cef89c29637b851bbb824f6610b4162a130c2d08c83775a37"} pod="openshift-machine-config-operator/machine-config-daemon-5htjl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 22:53:18 crc kubenswrapper[4793]: I0126 22:53:18.326858 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" containerID="cri-o://f579552814843c2cef89c29637b851bbb824f6610b4162a130c2d08c83775a37" gracePeriod=600 Jan 26 22:53:18 crc kubenswrapper[4793]: I0126 22:53:18.526229 4793 generic.go:334] "Generic (PLEG): container finished" podID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerID="f579552814843c2cef89c29637b851bbb824f6610b4162a130c2d08c83775a37" exitCode=0 Jan 
26 22:53:18 crc kubenswrapper[4793]: I0126 22:53:18.526299 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerDied","Data":"f579552814843c2cef89c29637b851bbb824f6610b4162a130c2d08c83775a37"} Jan 26 22:53:18 crc kubenswrapper[4793]: I0126 22:53:18.526683 4793 scope.go:117] "RemoveContainer" containerID="46ee2d2305b3200387efdd7f31d20295da8c90108bda1a6825298a5ec8faa6fe" Jan 26 22:53:19 crc kubenswrapper[4793]: I0126 22:53:19.538995 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerStarted","Data":"2270771b37172879cefcac364640536fda7d04596bd8eec13cf97ac1bcd6539c"} Jan 26 22:53:24 crc kubenswrapper[4793]: I0126 22:53:24.760499 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" Jan 26 22:53:24 crc kubenswrapper[4793]: I0126 22:53:24.761731 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" Jan 26 22:53:25 crc kubenswrapper[4793]: I0126 22:53:25.296147 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-zj8dj"] Jan 26 22:53:25 crc kubenswrapper[4793]: I0126 22:53:25.590997 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" event={"ID":"786f5045-7022-4802-920e-4fcf30bfe3d7","Type":"ContainerStarted","Data":"08fa8b4fcfee3759f23e389261de0104c3b691063642d88d8178fb8542826139"} Jan 26 22:53:28 crc kubenswrapper[4793]: I0126 22:53:28.611598 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" event={"ID":"786f5045-7022-4802-920e-4fcf30bfe3d7","Type":"ContainerStarted","Data":"b01b0f63fa7df0d567c0f7b827982e3b8b0f116e6d8ef4d7c63f846350c4f02d"} Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.778311 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" podStartSLOduration=23.482489656 podStartE2EDuration="25.778279638s" podCreationTimestamp="2026-01-26 22:53:04 +0000 UTC" firstStartedPulling="2026-01-26 22:53:25.312411352 +0000 UTC m=+820.301182904" lastFinishedPulling="2026-01-26 22:53:27.608201374 +0000 UTC m=+822.596972886" observedRunningTime="2026-01-26 22:53:28.639688107 +0000 UTC m=+823.628459669" watchObservedRunningTime="2026-01-26 22:53:29.778279638 +0000 UTC m=+824.767051190" Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.778600 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-kgq9g"] Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.780357 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-kgq9g" Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.783121 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-zlgg8" Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.799012 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v"] Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.799999 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v" Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.804654 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-kgq9g"] Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.805069 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.822131 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66w4g\" (UniqueName: \"kubernetes.io/projected/0399eddc-c943-4f43-8b69-c5b73e14e5c4-kube-api-access-66w4g\") pod \"nmstate-webhook-8474b5b9d8-9q92v\" (UID: \"0399eddc-c943-4f43-8b69-c5b73e14e5c4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v" Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.822207 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0399eddc-c943-4f43-8b69-c5b73e14e5c4-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-9q92v\" (UID: \"0399eddc-c943-4f43-8b69-c5b73e14e5c4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v" Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.822248 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-nmv97\" (UniqueName: \"kubernetes.io/projected/4c9631a4-9e0d-4fef-868a-d705499ebac8-kube-api-access-nmv97\") pod \"nmstate-metrics-54757c584b-kgq9g\" (UID: \"4c9631a4-9e0d-4fef-868a-d705499ebac8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-kgq9g" Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.840738 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-2xzlk"] Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.841672 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2xzlk" Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.844947 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v"] Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.923473 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c6de759f-205a-4bf1-ba92-843e9725d254-nmstate-lock\") pod \"nmstate-handler-2xzlk\" (UID: \"c6de759f-205a-4bf1-ba92-843e9725d254\") " pod="openshift-nmstate/nmstate-handler-2xzlk" Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.923771 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmv97\" (UniqueName: \"kubernetes.io/projected/4c9631a4-9e0d-4fef-868a-d705499ebac8-kube-api-access-nmv97\") pod \"nmstate-metrics-54757c584b-kgq9g\" (UID: \"4c9631a4-9e0d-4fef-868a-d705499ebac8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-kgq9g" Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.923944 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh4q6\" (UniqueName: \"kubernetes.io/projected/c6de759f-205a-4bf1-ba92-843e9725d254-kube-api-access-jh4q6\") pod \"nmstate-handler-2xzlk\" (UID: \"c6de759f-205a-4bf1-ba92-843e9725d254\") " 
pod="openshift-nmstate/nmstate-handler-2xzlk" Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.924047 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66w4g\" (UniqueName: \"kubernetes.io/projected/0399eddc-c943-4f43-8b69-c5b73e14e5c4-kube-api-access-66w4g\") pod \"nmstate-webhook-8474b5b9d8-9q92v\" (UID: \"0399eddc-c943-4f43-8b69-c5b73e14e5c4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v" Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.924199 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c6de759f-205a-4bf1-ba92-843e9725d254-dbus-socket\") pod \"nmstate-handler-2xzlk\" (UID: \"c6de759f-205a-4bf1-ba92-843e9725d254\") " pod="openshift-nmstate/nmstate-handler-2xzlk" Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.924297 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c6de759f-205a-4bf1-ba92-843e9725d254-ovs-socket\") pod \"nmstate-handler-2xzlk\" (UID: \"c6de759f-205a-4bf1-ba92-843e9725d254\") " pod="openshift-nmstate/nmstate-handler-2xzlk" Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.924415 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0399eddc-c943-4f43-8b69-c5b73e14e5c4-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-9q92v\" (UID: \"0399eddc-c943-4f43-8b69-c5b73e14e5c4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v" Jan 26 22:53:29 crc kubenswrapper[4793]: E0126 22:53:29.924525 4793 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 26 22:53:29 crc kubenswrapper[4793]: E0126 22:53:29.924603 4793 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/0399eddc-c943-4f43-8b69-c5b73e14e5c4-tls-key-pair podName:0399eddc-c943-4f43-8b69-c5b73e14e5c4 nodeName:}" failed. No retries permitted until 2026-01-26 22:53:30.424582359 +0000 UTC m=+825.413353881 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/0399eddc-c943-4f43-8b69-c5b73e14e5c4-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-9q92v" (UID: "0399eddc-c943-4f43-8b69-c5b73e14e5c4") : secret "openshift-nmstate-webhook" not found Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.940441 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g"] Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.941287 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g" Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.946512 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-65dfd" Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.947812 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.948020 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.957621 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmv97\" (UniqueName: \"kubernetes.io/projected/4c9631a4-9e0d-4fef-868a-d705499ebac8-kube-api-access-nmv97\") pod \"nmstate-metrics-54757c584b-kgq9g\" (UID: \"4c9631a4-9e0d-4fef-868a-d705499ebac8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-kgq9g" Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.962401 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-66w4g\" (UniqueName: \"kubernetes.io/projected/0399eddc-c943-4f43-8b69-c5b73e14e5c4-kube-api-access-66w4g\") pod \"nmstate-webhook-8474b5b9d8-9q92v\" (UID: \"0399eddc-c943-4f43-8b69-c5b73e14e5c4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v" Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.964138 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g"] Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.025288 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztplc\" (UniqueName: \"kubernetes.io/projected/a3a28baa-a522-4cc7-b853-037d96cf0590-kube-api-access-ztplc\") pod \"nmstate-console-plugin-7754f76f8b-b564g\" (UID: \"a3a28baa-a522-4cc7-b853-037d96cf0590\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.025329 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3a28baa-a522-4cc7-b853-037d96cf0590-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-b564g\" (UID: \"a3a28baa-a522-4cc7-b853-037d96cf0590\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.025367 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh4q6\" (UniqueName: \"kubernetes.io/projected/c6de759f-205a-4bf1-ba92-843e9725d254-kube-api-access-jh4q6\") pod \"nmstate-handler-2xzlk\" (UID: \"c6de759f-205a-4bf1-ba92-843e9725d254\") " pod="openshift-nmstate/nmstate-handler-2xzlk" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.025392 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/a3a28baa-a522-4cc7-b853-037d96cf0590-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-b564g\" (UID: \"a3a28baa-a522-4cc7-b853-037d96cf0590\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.025425 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c6de759f-205a-4bf1-ba92-843e9725d254-dbus-socket\") pod \"nmstate-handler-2xzlk\" (UID: \"c6de759f-205a-4bf1-ba92-843e9725d254\") " pod="openshift-nmstate/nmstate-handler-2xzlk" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.025442 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c6de759f-205a-4bf1-ba92-843e9725d254-ovs-socket\") pod \"nmstate-handler-2xzlk\" (UID: \"c6de759f-205a-4bf1-ba92-843e9725d254\") " pod="openshift-nmstate/nmstate-handler-2xzlk" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.025482 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c6de759f-205a-4bf1-ba92-843e9725d254-nmstate-lock\") pod \"nmstate-handler-2xzlk\" (UID: \"c6de759f-205a-4bf1-ba92-843e9725d254\") " pod="openshift-nmstate/nmstate-handler-2xzlk" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.025548 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c6de759f-205a-4bf1-ba92-843e9725d254-nmstate-lock\") pod \"nmstate-handler-2xzlk\" (UID: \"c6de759f-205a-4bf1-ba92-843e9725d254\") " pod="openshift-nmstate/nmstate-handler-2xzlk" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.025994 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c6de759f-205a-4bf1-ba92-843e9725d254-ovs-socket\") pod 
\"nmstate-handler-2xzlk\" (UID: \"c6de759f-205a-4bf1-ba92-843e9725d254\") " pod="openshift-nmstate/nmstate-handler-2xzlk" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.026079 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c6de759f-205a-4bf1-ba92-843e9725d254-dbus-socket\") pod \"nmstate-handler-2xzlk\" (UID: \"c6de759f-205a-4bf1-ba92-843e9725d254\") " pod="openshift-nmstate/nmstate-handler-2xzlk" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.043202 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh4q6\" (UniqueName: \"kubernetes.io/projected/c6de759f-205a-4bf1-ba92-843e9725d254-kube-api-access-jh4q6\") pod \"nmstate-handler-2xzlk\" (UID: \"c6de759f-205a-4bf1-ba92-843e9725d254\") " pod="openshift-nmstate/nmstate-handler-2xzlk" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.100997 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-kgq9g" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.126544 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztplc\" (UniqueName: \"kubernetes.io/projected/a3a28baa-a522-4cc7-b853-037d96cf0590-kube-api-access-ztplc\") pod \"nmstate-console-plugin-7754f76f8b-b564g\" (UID: \"a3a28baa-a522-4cc7-b853-037d96cf0590\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.126707 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3a28baa-a522-4cc7-b853-037d96cf0590-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-b564g\" (UID: \"a3a28baa-a522-4cc7-b853-037d96cf0590\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.126837 
4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a3a28baa-a522-4cc7-b853-037d96cf0590-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-b564g\" (UID: \"a3a28baa-a522-4cc7-b853-037d96cf0590\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g" Jan 26 22:53:30 crc kubenswrapper[4793]: E0126 22:53:30.126876 4793 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 26 22:53:30 crc kubenswrapper[4793]: E0126 22:53:30.127075 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3a28baa-a522-4cc7-b853-037d96cf0590-plugin-serving-cert podName:a3a28baa-a522-4cc7-b853-037d96cf0590 nodeName:}" failed. No retries permitted until 2026-01-26 22:53:30.627053837 +0000 UTC m=+825.615825349 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/a3a28baa-a522-4cc7-b853-037d96cf0590-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-b564g" (UID: "a3a28baa-a522-4cc7-b853-037d96cf0590") : secret "plugin-serving-cert" not found Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.127888 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a3a28baa-a522-4cc7-b853-037d96cf0590-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-b564g\" (UID: \"a3a28baa-a522-4cc7-b853-037d96cf0590\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.152577 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztplc\" (UniqueName: \"kubernetes.io/projected/a3a28baa-a522-4cc7-b853-037d96cf0590-kube-api-access-ztplc\") pod \"nmstate-console-plugin-7754f76f8b-b564g\" (UID: \"a3a28baa-a522-4cc7-b853-037d96cf0590\") " 
pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.159834 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-8c49b7cb8-rgrbq"] Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.160367 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2xzlk" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.160731 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.166815 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8c49b7cb8-rgrbq"] Jan 26 22:53:30 crc kubenswrapper[4793]: W0126 22:53:30.221777 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6de759f_205a_4bf1_ba92_843e9725d254.slice/crio-1daf989b8253d298d090fd46c8a7852ab6166efc755cdbb3c2f2c3508257bacd WatchSource:0}: Error finding container 1daf989b8253d298d090fd46c8a7852ab6166efc755cdbb3c2f2c3508257bacd: Status 404 returned error can't find the container with id 1daf989b8253d298d090fd46c8a7852ab6166efc755cdbb3c2f2c3508257bacd Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.228147 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce2c6343-3f86-4963-8d0e-d44613014ff0-console-serving-cert\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.228229 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ce2c6343-3f86-4963-8d0e-d44613014ff0-trusted-ca-bundle\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.228289 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2qfk\" (UniqueName: \"kubernetes.io/projected/ce2c6343-3f86-4963-8d0e-d44613014ff0-kube-api-access-h2qfk\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.228322 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ce2c6343-3f86-4963-8d0e-d44613014ff0-oauth-serving-cert\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.228346 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ce2c6343-3f86-4963-8d0e-d44613014ff0-console-config\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.228392 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ce2c6343-3f86-4963-8d0e-d44613014ff0-service-ca\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.228420 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ce2c6343-3f86-4963-8d0e-d44613014ff0-console-oauth-config\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.329735 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ce2c6343-3f86-4963-8d0e-d44613014ff0-oauth-serving-cert\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.329985 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ce2c6343-3f86-4963-8d0e-d44613014ff0-console-config\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.330062 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ce2c6343-3f86-4963-8d0e-d44613014ff0-service-ca\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.330111 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ce2c6343-3f86-4963-8d0e-d44613014ff0-console-oauth-config\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.330147 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/ce2c6343-3f86-4963-8d0e-d44613014ff0-console-serving-cert\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.330204 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce2c6343-3f86-4963-8d0e-d44613014ff0-trusted-ca-bundle\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.330248 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2qfk\" (UniqueName: \"kubernetes.io/projected/ce2c6343-3f86-4963-8d0e-d44613014ff0-kube-api-access-h2qfk\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.331062 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ce2c6343-3f86-4963-8d0e-d44613014ff0-console-config\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.331062 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ce2c6343-3f86-4963-8d0e-d44613014ff0-oauth-serving-cert\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.331603 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/ce2c6343-3f86-4963-8d0e-d44613014ff0-service-ca\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.332414 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce2c6343-3f86-4963-8d0e-d44613014ff0-trusted-ca-bundle\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.336878 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce2c6343-3f86-4963-8d0e-d44613014ff0-console-serving-cert\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.339263 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ce2c6343-3f86-4963-8d0e-d44613014ff0-console-oauth-config\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.347313 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2qfk\" (UniqueName: \"kubernetes.io/projected/ce2c6343-3f86-4963-8d0e-d44613014ff0-kube-api-access-h2qfk\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.419245 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-kgq9g"] Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.431854 
4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0399eddc-c943-4f43-8b69-c5b73e14e5c4-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-9q92v\" (UID: \"0399eddc-c943-4f43-8b69-c5b73e14e5c4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.435565 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0399eddc-c943-4f43-8b69-c5b73e14e5c4-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-9q92v\" (UID: \"0399eddc-c943-4f43-8b69-c5b73e14e5c4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.536673 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.627027 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2xzlk" event={"ID":"c6de759f-205a-4bf1-ba92-843e9725d254","Type":"ContainerStarted","Data":"1daf989b8253d298d090fd46c8a7852ab6166efc755cdbb3c2f2c3508257bacd"} Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.628624 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-kgq9g" event={"ID":"4c9631a4-9e0d-4fef-868a-d705499ebac8","Type":"ContainerStarted","Data":"1ee34a70dc04d39c20de1fa721623f5b12a6ee2d617ea422af2233365e4e22b3"} Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.640001 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3a28baa-a522-4cc7-b853-037d96cf0590-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-b564g\" (UID: \"a3a28baa-a522-4cc7-b853-037d96cf0590\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g" Jan 26 22:53:30 
crc kubenswrapper[4793]: I0126 22:53:30.645933 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3a28baa-a522-4cc7-b853-037d96cf0590-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-b564g\" (UID: \"a3a28baa-a522-4cc7-b853-037d96cf0590\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.722642 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.755123 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8c49b7cb8-rgrbq"] Jan 26 22:53:30 crc kubenswrapper[4793]: W0126 22:53:30.760349 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce2c6343_3f86_4963_8d0e_d44613014ff0.slice/crio-801a584500669f1fb66f84191ddc382b2eca050bd575f78cd6584457cdd670eb WatchSource:0}: Error finding container 801a584500669f1fb66f84191ddc382b2eca050bd575f78cd6584457cdd670eb: Status 404 returned error can't find the container with id 801a584500669f1fb66f84191ddc382b2eca050bd575f78cd6584457cdd670eb Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.859594 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g" Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.999608 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v"] Jan 26 22:53:31 crc kubenswrapper[4793]: I0126 22:53:31.099881 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g"] Jan 26 22:53:31 crc kubenswrapper[4793]: W0126 22:53:31.103064 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3a28baa_a522_4cc7_b853_037d96cf0590.slice/crio-83825e57e6a2b7ecbdaae75d5f92c863b9f57dc433d27b9f718800f587f88734 WatchSource:0}: Error finding container 83825e57e6a2b7ecbdaae75d5f92c863b9f57dc433d27b9f718800f587f88734: Status 404 returned error can't find the container with id 83825e57e6a2b7ecbdaae75d5f92c863b9f57dc433d27b9f718800f587f88734 Jan 26 22:53:31 crc kubenswrapper[4793]: I0126 22:53:31.636181 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g" event={"ID":"a3a28baa-a522-4cc7-b853-037d96cf0590","Type":"ContainerStarted","Data":"83825e57e6a2b7ecbdaae75d5f92c863b9f57dc433d27b9f718800f587f88734"} Jan 26 22:53:31 crc kubenswrapper[4793]: I0126 22:53:31.638279 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v" event={"ID":"0399eddc-c943-4f43-8b69-c5b73e14e5c4","Type":"ContainerStarted","Data":"6c1bfead4c8e3be0c1f6e243507eb63e735eadf5a14d0db98c2b4a0ab829d4ec"} Jan 26 22:53:31 crc kubenswrapper[4793]: I0126 22:53:31.640344 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8c49b7cb8-rgrbq" event={"ID":"ce2c6343-3f86-4963-8d0e-d44613014ff0","Type":"ContainerStarted","Data":"c56897b068e2623e40e14315069aec77b62ac40e412b277d68595ded8d4c6680"} Jan 26 22:53:31 crc 
kubenswrapper[4793]: I0126 22:53:31.640408 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8c49b7cb8-rgrbq" event={"ID":"ce2c6343-3f86-4963-8d0e-d44613014ff0","Type":"ContainerStarted","Data":"801a584500669f1fb66f84191ddc382b2eca050bd575f78cd6584457cdd670eb"} Jan 26 22:53:31 crc kubenswrapper[4793]: I0126 22:53:31.662696 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-8c49b7cb8-rgrbq" podStartSLOduration=1.662652248 podStartE2EDuration="1.662652248s" podCreationTimestamp="2026-01-26 22:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:53:31.658014696 +0000 UTC m=+826.646786218" watchObservedRunningTime="2026-01-26 22:53:31.662652248 +0000 UTC m=+826.651423780" Jan 26 22:53:32 crc kubenswrapper[4793]: I0126 22:53:32.499405 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:33 crc kubenswrapper[4793]: I0126 22:53:33.655607 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-kgq9g" event={"ID":"4c9631a4-9e0d-4fef-868a-d705499ebac8","Type":"ContainerStarted","Data":"355a3af769cda5995239e241a9ccf77be72572a12a6b96ac4104b88f774c37a9"} Jan 26 22:53:33 crc kubenswrapper[4793]: I0126 22:53:33.657213 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v" event={"ID":"0399eddc-c943-4f43-8b69-c5b73e14e5c4","Type":"ContainerStarted","Data":"39d51dc8ffc8e76fbf33b766020f76ecde4ed6970a8edf8cdf7b9c54a86b9f56"} Jan 26 22:53:33 crc kubenswrapper[4793]: I0126 22:53:33.657311 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v" Jan 26 22:53:33 crc kubenswrapper[4793]: I0126 22:53:33.658922 4793 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2xzlk" event={"ID":"c6de759f-205a-4bf1-ba92-843e9725d254","Type":"ContainerStarted","Data":"3a24ce080732f68ab10c75ddc527e156bc7169071636f51527869f9c1718a28a"} Jan 26 22:53:33 crc kubenswrapper[4793]: I0126 22:53:33.659256 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-2xzlk" Jan 26 22:53:33 crc kubenswrapper[4793]: I0126 22:53:33.672138 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v" podStartSLOduration=2.840035883 podStartE2EDuration="4.672120176s" podCreationTimestamp="2026-01-26 22:53:29 +0000 UTC" firstStartedPulling="2026-01-26 22:53:31.01992064 +0000 UTC m=+826.008692152" lastFinishedPulling="2026-01-26 22:53:32.852004923 +0000 UTC m=+827.840776445" observedRunningTime="2026-01-26 22:53:33.671601042 +0000 UTC m=+828.660372554" watchObservedRunningTime="2026-01-26 22:53:33.672120176 +0000 UTC m=+828.660891688" Jan 26 22:53:33 crc kubenswrapper[4793]: I0126 22:53:33.689435 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-2xzlk" podStartSLOduration=2.067779321 podStartE2EDuration="4.689416658s" podCreationTimestamp="2026-01-26 22:53:29 +0000 UTC" firstStartedPulling="2026-01-26 22:53:30.225714373 +0000 UTC m=+825.214485885" lastFinishedPulling="2026-01-26 22:53:32.8473517 +0000 UTC m=+827.836123222" observedRunningTime="2026-01-26 22:53:33.687537385 +0000 UTC m=+828.676308917" watchObservedRunningTime="2026-01-26 22:53:33.689416658 +0000 UTC m=+828.678188170" Jan 26 22:53:34 crc kubenswrapper[4793]: I0126 22:53:34.669749 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g" event={"ID":"a3a28baa-a522-4cc7-b853-037d96cf0590","Type":"ContainerStarted","Data":"0617b8b9cb3e21cbfd79f8329c29febaffb4a9cfa345261c746f5bcc60070458"} Jan 26 
22:53:34 crc kubenswrapper[4793]: I0126 22:53:34.731484 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g" podStartSLOduration=2.954525349 podStartE2EDuration="5.731462883s" podCreationTimestamp="2026-01-26 22:53:29 +0000 UTC" firstStartedPulling="2026-01-26 22:53:31.105493754 +0000 UTC m=+826.094265266" lastFinishedPulling="2026-01-26 22:53:33.882431268 +0000 UTC m=+828.871202800" observedRunningTime="2026-01-26 22:53:34.726847252 +0000 UTC m=+829.715618794" watchObservedRunningTime="2026-01-26 22:53:34.731462883 +0000 UTC m=+829.720234405" Jan 26 22:53:36 crc kubenswrapper[4793]: I0126 22:53:36.688087 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-kgq9g" event={"ID":"4c9631a4-9e0d-4fef-868a-d705499ebac8","Type":"ContainerStarted","Data":"f09532994cfa9e22f4747e8676812dfd151778faa4d2fac4bb9709e2d2c9ce89"} Jan 26 22:53:36 crc kubenswrapper[4793]: I0126 22:53:36.720817 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-kgq9g" podStartSLOduration=2.441510899 podStartE2EDuration="7.720799477s" podCreationTimestamp="2026-01-26 22:53:29 +0000 UTC" firstStartedPulling="2026-01-26 22:53:30.42461473 +0000 UTC m=+825.413386242" lastFinishedPulling="2026-01-26 22:53:35.703903278 +0000 UTC m=+830.692674820" observedRunningTime="2026-01-26 22:53:36.716811934 +0000 UTC m=+831.705583446" watchObservedRunningTime="2026-01-26 22:53:36.720799477 +0000 UTC m=+831.709570989" Jan 26 22:53:40 crc kubenswrapper[4793]: I0126 22:53:40.189239 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-2xzlk" Jan 26 22:53:40 crc kubenswrapper[4793]: I0126 22:53:40.538544 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:40 crc 
kubenswrapper[4793]: I0126 22:53:40.538631 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:40 crc kubenswrapper[4793]: I0126 22:53:40.546608 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:40 crc kubenswrapper[4793]: I0126 22:53:40.726544 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-8c49b7cb8-rgrbq" Jan 26 22:53:40 crc kubenswrapper[4793]: I0126 22:53:40.782257 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-57ccj"] Jan 26 22:53:46 crc kubenswrapper[4793]: I0126 22:53:46.146339 4793 scope.go:117] "RemoveContainer" containerID="2cb61fdad3703c9db3f70a80af86571cbed8b1dc20e073f4ee149431f71f0298" Jan 26 22:53:46 crc kubenswrapper[4793]: I0126 22:53:46.776472 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l5qgq_2e6daa0d-7641-46e1-b9ab-8479c1cd00d6/kube-multus/2.log" Jan 26 22:53:50 crc kubenswrapper[4793]: I0126 22:53:50.735169 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v" Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.332135 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vlvrv"] Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.335154 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vlvrv" Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.347917 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vlvrv"] Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.459616 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7qbk\" (UniqueName: \"kubernetes.io/projected/a802d493-70e5-40e0-b991-7d58b6570abd-kube-api-access-r7qbk\") pod \"certified-operators-vlvrv\" (UID: \"a802d493-70e5-40e0-b991-7d58b6570abd\") " pod="openshift-marketplace/certified-operators-vlvrv" Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.459676 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a802d493-70e5-40e0-b991-7d58b6570abd-catalog-content\") pod \"certified-operators-vlvrv\" (UID: \"a802d493-70e5-40e0-b991-7d58b6570abd\") " pod="openshift-marketplace/certified-operators-vlvrv" Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.460408 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a802d493-70e5-40e0-b991-7d58b6570abd-utilities\") pod \"certified-operators-vlvrv\" (UID: \"a802d493-70e5-40e0-b991-7d58b6570abd\") " pod="openshift-marketplace/certified-operators-vlvrv" Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.562979 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a802d493-70e5-40e0-b991-7d58b6570abd-utilities\") pod \"certified-operators-vlvrv\" (UID: \"a802d493-70e5-40e0-b991-7d58b6570abd\") " pod="openshift-marketplace/certified-operators-vlvrv" Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.563181 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-r7qbk\" (UniqueName: \"kubernetes.io/projected/a802d493-70e5-40e0-b991-7d58b6570abd-kube-api-access-r7qbk\") pod \"certified-operators-vlvrv\" (UID: \"a802d493-70e5-40e0-b991-7d58b6570abd\") " pod="openshift-marketplace/certified-operators-vlvrv" Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.563272 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a802d493-70e5-40e0-b991-7d58b6570abd-catalog-content\") pod \"certified-operators-vlvrv\" (UID: \"a802d493-70e5-40e0-b991-7d58b6570abd\") " pod="openshift-marketplace/certified-operators-vlvrv" Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.564659 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a802d493-70e5-40e0-b991-7d58b6570abd-utilities\") pod \"certified-operators-vlvrv\" (UID: \"a802d493-70e5-40e0-b991-7d58b6570abd\") " pod="openshift-marketplace/certified-operators-vlvrv" Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.564796 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a802d493-70e5-40e0-b991-7d58b6570abd-catalog-content\") pod \"certified-operators-vlvrv\" (UID: \"a802d493-70e5-40e0-b991-7d58b6570abd\") " pod="openshift-marketplace/certified-operators-vlvrv" Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.593410 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7qbk\" (UniqueName: \"kubernetes.io/projected/a802d493-70e5-40e0-b991-7d58b6570abd-kube-api-access-r7qbk\") pod \"certified-operators-vlvrv\" (UID: \"a802d493-70e5-40e0-b991-7d58b6570abd\") " pod="openshift-marketplace/certified-operators-vlvrv" Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.673060 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vlvrv" Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.946039 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vlvrv"] Jan 26 22:53:57 crc kubenswrapper[4793]: I0126 22:53:57.866643 4793 generic.go:334] "Generic (PLEG): container finished" podID="a802d493-70e5-40e0-b991-7d58b6570abd" containerID="9c2709d3c4485752d6954a3c7738058bb28d8f789f8b4ad23a0dc8164d873564" exitCode=0 Jan 26 22:53:57 crc kubenswrapper[4793]: I0126 22:53:57.866764 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlvrv" event={"ID":"a802d493-70e5-40e0-b991-7d58b6570abd","Type":"ContainerDied","Data":"9c2709d3c4485752d6954a3c7738058bb28d8f789f8b4ad23a0dc8164d873564"} Jan 26 22:53:57 crc kubenswrapper[4793]: I0126 22:53:57.867167 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlvrv" event={"ID":"a802d493-70e5-40e0-b991-7d58b6570abd","Type":"ContainerStarted","Data":"c361b43e23ef627f6653c64ba1b39c20cc81cca76bd27edcc2b9c43119e89341"} Jan 26 22:53:58 crc kubenswrapper[4793]: I0126 22:53:58.875492 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlvrv" event={"ID":"a802d493-70e5-40e0-b991-7d58b6570abd","Type":"ContainerStarted","Data":"8cea5eb1264922a0e7e44a9cfa200489b716fb496f8e3e5a1bdeba79eb3a6e63"} Jan 26 22:53:59 crc kubenswrapper[4793]: I0126 22:53:59.890015 4793 generic.go:334] "Generic (PLEG): container finished" podID="a802d493-70e5-40e0-b991-7d58b6570abd" containerID="8cea5eb1264922a0e7e44a9cfa200489b716fb496f8e3e5a1bdeba79eb3a6e63" exitCode=0 Jan 26 22:53:59 crc kubenswrapper[4793]: I0126 22:53:59.890117 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlvrv" 
event={"ID":"a802d493-70e5-40e0-b991-7d58b6570abd","Type":"ContainerDied","Data":"8cea5eb1264922a0e7e44a9cfa200489b716fb496f8e3e5a1bdeba79eb3a6e63"} Jan 26 22:54:00 crc kubenswrapper[4793]: I0126 22:54:00.905033 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlvrv" event={"ID":"a802d493-70e5-40e0-b991-7d58b6570abd","Type":"ContainerStarted","Data":"7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c"} Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.318541 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vlvrv" podStartSLOduration=6.881922981 podStartE2EDuration="9.318521415s" podCreationTimestamp="2026-01-26 22:53:56 +0000 UTC" firstStartedPulling="2026-01-26 22:53:57.868737249 +0000 UTC m=+852.857508801" lastFinishedPulling="2026-01-26 22:54:00.305335713 +0000 UTC m=+855.294107235" observedRunningTime="2026-01-26 22:54:00.958611512 +0000 UTC m=+855.947383054" watchObservedRunningTime="2026-01-26 22:54:05.318521415 +0000 UTC m=+860.307292927" Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.323446 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs"] Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.324472 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs" Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.333800 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.335111 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs"] Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.394176 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs\" (UID: \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs" Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.394293 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtbgh\" (UniqueName: \"kubernetes.io/projected/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-kube-api-access-qtbgh\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs\" (UID: \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs" Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.394358 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs\" (UID: \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs" Jan 26 22:54:05 crc kubenswrapper[4793]: 
I0126 22:54:05.495146 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtbgh\" (UniqueName: \"kubernetes.io/projected/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-kube-api-access-qtbgh\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs\" (UID: \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs" Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.495243 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs\" (UID: \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs" Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.495330 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs\" (UID: \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs" Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.495910 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs\" (UID: \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs" Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.495966 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs\" (UID: \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs" Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.513029 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtbgh\" (UniqueName: \"kubernetes.io/projected/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-kube-api-access-qtbgh\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs\" (UID: \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs" Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.644617 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs" Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.832891 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-57ccj" podUID="f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" containerName="console" containerID="cri-o://aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa" gracePeriod=15 Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.910904 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs"] Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.168726 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-57ccj_f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd/console/0.log" Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.169106 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.313511 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-serving-cert\") pod \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.313582 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-service-ca\") pod \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.313623 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-trusted-ca-bundle\") pod \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.313657 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-oauth-serving-cert\") pod \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.313719 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-oauth-config\") pod \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.313766 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-config\") pod \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.313828 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqq8q\" (UniqueName: \"kubernetes.io/projected/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-kube-api-access-mqq8q\") pod \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.314670 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" (UID: "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.314697 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-config" (OuterVolumeSpecName: "console-config") pod "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" (UID: "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.315078 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" (UID: "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.315309 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-service-ca" (OuterVolumeSpecName: "service-ca") pod "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" (UID: "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.320286 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-kube-api-access-mqq8q" (OuterVolumeSpecName: "kube-api-access-mqq8q") pod "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" (UID: "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd"). InnerVolumeSpecName "kube-api-access-mqq8q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.320932 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" (UID: "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.322141 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" (UID: "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.415281 4793 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.415326 4793 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.415340 4793 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-config\") on node \"crc\" DevicePath \"\""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.415352 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqq8q\" (UniqueName: \"kubernetes.io/projected/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-kube-api-access-mqq8q\") on node \"crc\" DevicePath \"\""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.415366 4793 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.415381 4793 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-service-ca\") on node \"crc\" DevicePath \"\""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.415391 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.673936 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vlvrv"
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.674027 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vlvrv"
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.736899 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vlvrv"
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.950149 4793 generic.go:334] "Generic (PLEG): container finished" podID="c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" containerID="64d5cb9763014f7de2894fd7c4bbd529608ff547250631d88174ddee8e75766c" exitCode=0
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.950249 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs" event={"ID":"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a","Type":"ContainerDied","Data":"64d5cb9763014f7de2894fd7c4bbd529608ff547250631d88174ddee8e75766c"}
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.950332 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs" event={"ID":"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a","Type":"ContainerStarted","Data":"b33e7135b1ab732a93c47c2e2988ec9f52bae84e675d192f79df1c3e2d70873e"}
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.954521 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-57ccj_f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd/console/0.log"
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.955099 4793 generic.go:334] "Generic (PLEG): container finished" podID="f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" containerID="aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa" exitCode=2
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.955250 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-57ccj"
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.955321 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-57ccj" event={"ID":"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd","Type":"ContainerDied","Data":"aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa"}
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.955369 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-57ccj" event={"ID":"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd","Type":"ContainerDied","Data":"fc7e45cc09b7e136cabcbaac23e466557070638bb704168bde4eed70e8068d9f"}
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.955407 4793 scope.go:117] "RemoveContainer" containerID="aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa"
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.988490 4793 scope.go:117] "RemoveContainer" containerID="aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa"
Jan 26 22:54:06 crc kubenswrapper[4793]: E0126 22:54:06.989300 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa\": container with ID starting with aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa not found: ID does not exist" containerID="aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa"
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.989375 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa"} err="failed to get container status \"aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa\": rpc error: code = NotFound desc = could not find container \"aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa\": container with ID starting with aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa not found: ID does not exist"
Jan 26 22:54:07 crc kubenswrapper[4793]: I0126 22:54:07.011928 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-57ccj"]
Jan 26 22:54:07 crc kubenswrapper[4793]: I0126 22:54:07.020505 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-57ccj"]
Jan 26 22:54:07 crc kubenswrapper[4793]: I0126 22:54:07.027702 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vlvrv"
Jan 26 22:54:07 crc kubenswrapper[4793]: I0126 22:54:07.774094 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" path="/var/lib/kubelet/pods/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd/volumes"
Jan 26 22:54:08 crc kubenswrapper[4793]: I0126 22:54:08.984517 4793 generic.go:334] "Generic (PLEG): container finished" podID="c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" containerID="b9d70838e90a86618cb5150e3376b7facdb4180322aa778b9e6d6d68b22887e4" exitCode=0
Jan 26 22:54:08 crc kubenswrapper[4793]: I0126 22:54:08.985088 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs" event={"ID":"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a","Type":"ContainerDied","Data":"b9d70838e90a86618cb5150e3376b7facdb4180322aa778b9e6d6d68b22887e4"}
Jan 26 22:54:09 crc kubenswrapper[4793]: I0126 22:54:09.484119 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vlvrv"]
Jan 26 22:54:09 crc kubenswrapper[4793]: I0126 22:54:09.484622 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vlvrv" podUID="a802d493-70e5-40e0-b991-7d58b6570abd" containerName="registry-server" containerID="cri-o://7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c" gracePeriod=2
Jan 26 22:54:09 crc kubenswrapper[4793]: I0126 22:54:09.912135 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vlvrv"
Jan 26 22:54:09 crc kubenswrapper[4793]: I0126 22:54:09.992946 4793 generic.go:334] "Generic (PLEG): container finished" podID="a802d493-70e5-40e0-b991-7d58b6570abd" containerID="7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c" exitCode=0
Jan 26 22:54:09 crc kubenswrapper[4793]: I0126 22:54:09.993006 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlvrv" event={"ID":"a802d493-70e5-40e0-b991-7d58b6570abd","Type":"ContainerDied","Data":"7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c"}
Jan 26 22:54:09 crc kubenswrapper[4793]: I0126 22:54:09.993099 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlvrv" event={"ID":"a802d493-70e5-40e0-b991-7d58b6570abd","Type":"ContainerDied","Data":"c361b43e23ef627f6653c64ba1b39c20cc81cca76bd27edcc2b9c43119e89341"}
Jan 26 22:54:09 crc kubenswrapper[4793]: I0126 22:54:09.993105 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vlvrv"
Jan 26 22:54:09 crc kubenswrapper[4793]: I0126 22:54:09.993126 4793 scope.go:117] "RemoveContainer" containerID="7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c"
Jan 26 22:54:09 crc kubenswrapper[4793]: I0126 22:54:09.995989 4793 generic.go:334] "Generic (PLEG): container finished" podID="c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" containerID="cffb1ccca7d2b8c24cc905c579d315558094aaf65352d11411c082529bed8a6d" exitCode=0
Jan 26 22:54:09 crc kubenswrapper[4793]: I0126 22:54:09.996053 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs" event={"ID":"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a","Type":"ContainerDied","Data":"cffb1ccca7d2b8c24cc905c579d315558094aaf65352d11411c082529bed8a6d"}
Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.019310 4793 scope.go:117] "RemoveContainer" containerID="8cea5eb1264922a0e7e44a9cfa200489b716fb496f8e3e5a1bdeba79eb3a6e63"
Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.037962 4793 scope.go:117] "RemoveContainer" containerID="9c2709d3c4485752d6954a3c7738058bb28d8f789f8b4ad23a0dc8164d873564"
Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.051697 4793 scope.go:117] "RemoveContainer" containerID="7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c"
Jan 26 22:54:10 crc kubenswrapper[4793]: E0126 22:54:10.052755 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c\": container with ID starting with 7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c not found: ID does not exist" containerID="7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c"
Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.052835 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c"} err="failed to get container status \"7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c\": rpc error: code = NotFound desc = could not find container \"7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c\": container with ID starting with 7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c not found: ID does not exist"
Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.052900 4793 scope.go:117] "RemoveContainer" containerID="8cea5eb1264922a0e7e44a9cfa200489b716fb496f8e3e5a1bdeba79eb3a6e63"
Jan 26 22:54:10 crc kubenswrapper[4793]: E0126 22:54:10.053363 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cea5eb1264922a0e7e44a9cfa200489b716fb496f8e3e5a1bdeba79eb3a6e63\": container with ID starting with 8cea5eb1264922a0e7e44a9cfa200489b716fb496f8e3e5a1bdeba79eb3a6e63 not found: ID does not exist" containerID="8cea5eb1264922a0e7e44a9cfa200489b716fb496f8e3e5a1bdeba79eb3a6e63"
Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.053406 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cea5eb1264922a0e7e44a9cfa200489b716fb496f8e3e5a1bdeba79eb3a6e63"} err="failed to get container status \"8cea5eb1264922a0e7e44a9cfa200489b716fb496f8e3e5a1bdeba79eb3a6e63\": rpc error: code = NotFound desc = could not find container \"8cea5eb1264922a0e7e44a9cfa200489b716fb496f8e3e5a1bdeba79eb3a6e63\": container with ID starting with 8cea5eb1264922a0e7e44a9cfa200489b716fb496f8e3e5a1bdeba79eb3a6e63 not found: ID does not exist"
Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.053427 4793 scope.go:117] "RemoveContainer" containerID="9c2709d3c4485752d6954a3c7738058bb28d8f789f8b4ad23a0dc8164d873564"
Jan 26 22:54:10 crc kubenswrapper[4793]: E0126 22:54:10.053999 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c2709d3c4485752d6954a3c7738058bb28d8f789f8b4ad23a0dc8164d873564\": container with ID starting with 9c2709d3c4485752d6954a3c7738058bb28d8f789f8b4ad23a0dc8164d873564 not found: ID does not exist" containerID="9c2709d3c4485752d6954a3c7738058bb28d8f789f8b4ad23a0dc8164d873564"
Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.054071 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c2709d3c4485752d6954a3c7738058bb28d8f789f8b4ad23a0dc8164d873564"} err="failed to get container status \"9c2709d3c4485752d6954a3c7738058bb28d8f789f8b4ad23a0dc8164d873564\": rpc error: code = NotFound desc = could not find container \"9c2709d3c4485752d6954a3c7738058bb28d8f789f8b4ad23a0dc8164d873564\": container with ID starting with 9c2709d3c4485752d6954a3c7738058bb28d8f789f8b4ad23a0dc8164d873564 not found: ID does not exist"
Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.072669 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a802d493-70e5-40e0-b991-7d58b6570abd-catalog-content\") pod \"a802d493-70e5-40e0-b991-7d58b6570abd\" (UID: \"a802d493-70e5-40e0-b991-7d58b6570abd\") "
Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.072965 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a802d493-70e5-40e0-b991-7d58b6570abd-utilities\") pod \"a802d493-70e5-40e0-b991-7d58b6570abd\" (UID: \"a802d493-70e5-40e0-b991-7d58b6570abd\") "
Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.073055 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7qbk\" (UniqueName: \"kubernetes.io/projected/a802d493-70e5-40e0-b991-7d58b6570abd-kube-api-access-r7qbk\") pod \"a802d493-70e5-40e0-b991-7d58b6570abd\" (UID: \"a802d493-70e5-40e0-b991-7d58b6570abd\") "
Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.073989 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a802d493-70e5-40e0-b991-7d58b6570abd-utilities" (OuterVolumeSpecName: "utilities") pod "a802d493-70e5-40e0-b991-7d58b6570abd" (UID: "a802d493-70e5-40e0-b991-7d58b6570abd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.079172 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a802d493-70e5-40e0-b991-7d58b6570abd-kube-api-access-r7qbk" (OuterVolumeSpecName: "kube-api-access-r7qbk") pod "a802d493-70e5-40e0-b991-7d58b6570abd" (UID: "a802d493-70e5-40e0-b991-7d58b6570abd"). InnerVolumeSpecName "kube-api-access-r7qbk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.115365 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a802d493-70e5-40e0-b991-7d58b6570abd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a802d493-70e5-40e0-b991-7d58b6570abd" (UID: "a802d493-70e5-40e0-b991-7d58b6570abd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.175273 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a802d493-70e5-40e0-b991-7d58b6570abd-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.175334 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7qbk\" (UniqueName: \"kubernetes.io/projected/a802d493-70e5-40e0-b991-7d58b6570abd-kube-api-access-r7qbk\") on node \"crc\" DevicePath \"\""
Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.175357 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a802d493-70e5-40e0-b991-7d58b6570abd-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.349093 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vlvrv"]
Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.353774 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vlvrv"]
Jan 26 22:54:11 crc kubenswrapper[4793]: I0126 22:54:11.251143 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs"
Jan 26 22:54:11 crc kubenswrapper[4793]: I0126 22:54:11.392274 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-bundle\") pod \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\" (UID: \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\") "
Jan 26 22:54:11 crc kubenswrapper[4793]: I0126 22:54:11.392886 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtbgh\" (UniqueName: \"kubernetes.io/projected/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-kube-api-access-qtbgh\") pod \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\" (UID: \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\") "
Jan 26 22:54:11 crc kubenswrapper[4793]: I0126 22:54:11.392946 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-util\") pod \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\" (UID: \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\") "
Jan 26 22:54:11 crc kubenswrapper[4793]: I0126 22:54:11.393713 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-bundle" (OuterVolumeSpecName: "bundle") pod "c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" (UID: "c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 22:54:11 crc kubenswrapper[4793]: I0126 22:54:11.399862 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-kube-api-access-qtbgh" (OuterVolumeSpecName: "kube-api-access-qtbgh") pod "c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" (UID: "c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a"). InnerVolumeSpecName "kube-api-access-qtbgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 22:54:11 crc kubenswrapper[4793]: I0126 22:54:11.494713 4793 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 22:54:11 crc kubenswrapper[4793]: I0126 22:54:11.494763 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtbgh\" (UniqueName: \"kubernetes.io/projected/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-kube-api-access-qtbgh\") on node \"crc\" DevicePath \"\""
Jan 26 22:54:11 crc kubenswrapper[4793]: I0126 22:54:11.619691 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-util" (OuterVolumeSpecName: "util") pod "c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" (UID: "c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 22:54:11 crc kubenswrapper[4793]: I0126 22:54:11.697125 4793 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-util\") on node \"crc\" DevicePath \"\""
Jan 26 22:54:11 crc kubenswrapper[4793]: I0126 22:54:11.772037 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a802d493-70e5-40e0-b991-7d58b6570abd" path="/var/lib/kubelet/pods/a802d493-70e5-40e0-b991-7d58b6570abd/volumes"
Jan 26 22:54:12 crc kubenswrapper[4793]: I0126 22:54:12.015470 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs" event={"ID":"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a","Type":"ContainerDied","Data":"b33e7135b1ab732a93c47c2e2988ec9f52bae84e675d192f79df1c3e2d70873e"}
Jan 26 22:54:12 crc kubenswrapper[4793]: I0126 22:54:12.015537 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b33e7135b1ab732a93c47c2e2988ec9f52bae84e675d192f79df1c3e2d70873e"
Jan 26 22:54:12 crc kubenswrapper[4793]: I0126 22:54:12.015589 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.011351 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx"]
Jan 26 22:54:24 crc kubenswrapper[4793]: E0126 22:54:24.012246 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" containerName="console"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.012261 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" containerName="console"
Jan 26 22:54:24 crc kubenswrapper[4793]: E0126 22:54:24.012277 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a802d493-70e5-40e0-b991-7d58b6570abd" containerName="extract-utilities"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.012284 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a802d493-70e5-40e0-b991-7d58b6570abd" containerName="extract-utilities"
Jan 26 22:54:24 crc kubenswrapper[4793]: E0126 22:54:24.012295 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" containerName="pull"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.012301 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" containerName="pull"
Jan 26 22:54:24 crc kubenswrapper[4793]: E0126 22:54:24.012308 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" containerName="util"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.012314 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" containerName="util"
Jan 26 22:54:24 crc kubenswrapper[4793]: E0126 22:54:24.012321 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a802d493-70e5-40e0-b991-7d58b6570abd" containerName="extract-content"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.012327 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a802d493-70e5-40e0-b991-7d58b6570abd" containerName="extract-content"
Jan 26 22:54:24 crc kubenswrapper[4793]: E0126 22:54:24.012336 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" containerName="extract"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.012342 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" containerName="extract"
Jan 26 22:54:24 crc kubenswrapper[4793]: E0126 22:54:24.012350 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a802d493-70e5-40e0-b991-7d58b6570abd" containerName="registry-server"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.012356 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a802d493-70e5-40e0-b991-7d58b6570abd" containerName="registry-server"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.012456 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" containerName="console"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.012466 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a802d493-70e5-40e0-b991-7d58b6570abd" containerName="registry-server"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.012475 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" containerName="extract"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.012863 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx"
Jan 26 22:54:24 crc kubenswrapper[4793]: W0126 22:54:24.016671 4793 reflector.go:561] object-"metallb-system"/"metallb-operator-webhook-server-cert": failed to list *v1.Secret: secrets "metallb-operator-webhook-server-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object
Jan 26 22:54:24 crc kubenswrapper[4793]: E0126 22:54:24.016722 4793 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-operator-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"metallb-operator-webhook-server-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 26 22:54:24 crc kubenswrapper[4793]: W0126 22:54:24.017216 4793 reflector.go:561] object-"metallb-system"/"manager-account-dockercfg-z6pgj": failed to list *v1.Secret: secrets "manager-account-dockercfg-z6pgj" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object
Jan 26 22:54:24 crc kubenswrapper[4793]: E0126 22:54:24.017267 4793 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"manager-account-dockercfg-z6pgj\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"manager-account-dockercfg-z6pgj\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 26 22:54:24 crc kubenswrapper[4793]: W0126 22:54:24.017210 4793 reflector.go:561] object-"metallb-system"/"metallb-operator-controller-manager-service-cert": failed to list *v1.Secret: secrets "metallb-operator-controller-manager-service-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object
Jan 26 22:54:24 crc kubenswrapper[4793]: W0126 22:54:24.017274 4793 reflector.go:561] object-"metallb-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object
Jan 26 22:54:24 crc kubenswrapper[4793]: E0126 22:54:24.017329 4793 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 26 22:54:24 crc kubenswrapper[4793]: E0126 22:54:24.017293 4793 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-operator-controller-manager-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"metallb-operator-controller-manager-service-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.024846 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.038780 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx"]
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.142606 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg"]
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.143386 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.145066 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.145086 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.145937 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-vzpmx"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.162473 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg"]
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.199366 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/98b1ff78-307c-4f20-8d26-021aac793b08-apiservice-cert\") pod \"metallb-operator-controller-manager-6947468bd9-d2zfx\" (UID: \"98b1ff78-307c-4f20-8d26-021aac793b08\") " pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.199421 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwcg4\" (UniqueName: \"kubernetes.io/projected/98b1ff78-307c-4f20-8d26-021aac793b08-kube-api-access-hwcg4\") pod \"metallb-operator-controller-manager-6947468bd9-d2zfx\" (UID: \"98b1ff78-307c-4f20-8d26-021aac793b08\") " pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.199665 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/98b1ff78-307c-4f20-8d26-021aac793b08-webhook-cert\") pod \"metallb-operator-controller-manager-6947468bd9-d2zfx\" (UID: \"98b1ff78-307c-4f20-8d26-021aac793b08\") " pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.301098 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzz58\" (UniqueName: \"kubernetes.io/projected/4c33d800-1c21-459c-b488-c4d43f46147a-kube-api-access-jzz58\") pod \"metallb-operator-webhook-server-576dcf7dd-dz2zg\" (UID: \"4c33d800-1c21-459c-b488-c4d43f46147a\") " pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.301166 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/98b1ff78-307c-4f20-8d26-021aac793b08-webhook-cert\") pod \"metallb-operator-controller-manager-6947468bd9-d2zfx\" (UID: \"98b1ff78-307c-4f20-8d26-021aac793b08\") " pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.301211 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c33d800-1c21-459c-b488-c4d43f46147a-webhook-cert\") pod \"metallb-operator-webhook-server-576dcf7dd-dz2zg\" (UID: \"4c33d800-1c21-459c-b488-c4d43f46147a\") " pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.301235 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/98b1ff78-307c-4f20-8d26-021aac793b08-apiservice-cert\") pod \"metallb-operator-controller-manager-6947468bd9-d2zfx\" (UID: \"98b1ff78-307c-4f20-8d26-021aac793b08\") " pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.301253 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwcg4\" (UniqueName: \"kubernetes.io/projected/98b1ff78-307c-4f20-8d26-021aac793b08-kube-api-access-hwcg4\") pod \"metallb-operator-controller-manager-6947468bd9-d2zfx\" (UID: \"98b1ff78-307c-4f20-8d26-021aac793b08\") " pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.301308 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c33d800-1c21-459c-b488-c4d43f46147a-apiservice-cert\") pod \"metallb-operator-webhook-server-576dcf7dd-dz2zg\" (UID: \"4c33d800-1c21-459c-b488-c4d43f46147a\") " pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.402142 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c33d800-1c21-459c-b488-c4d43f46147a-apiservice-cert\") pod \"metallb-operator-webhook-server-576dcf7dd-dz2zg\" (UID: \"4c33d800-1c21-459c-b488-c4d43f46147a\") " pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.402228 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzz58\" (UniqueName: \"kubernetes.io/projected/4c33d800-1c21-459c-b488-c4d43f46147a-kube-api-access-jzz58\") pod \"metallb-operator-webhook-server-576dcf7dd-dz2zg\" (UID: \"4c33d800-1c21-459c-b488-c4d43f46147a\") " pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.402279 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c33d800-1c21-459c-b488-c4d43f46147a-webhook-cert\") pod \"metallb-operator-webhook-server-576dcf7dd-dz2zg\" (UID: \"4c33d800-1c21-459c-b488-c4d43f46147a\") " pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.409331 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c33d800-1c21-459c-b488-c4d43f46147a-apiservice-cert\") pod \"metallb-operator-webhook-server-576dcf7dd-dz2zg\" (UID: \"4c33d800-1c21-459c-b488-c4d43f46147a\") " pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.409921 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c33d800-1c21-459c-b488-c4d43f46147a-webhook-cert\") pod \"metallb-operator-webhook-server-576dcf7dd-dz2zg\" (UID: \"4c33d800-1c21-459c-b488-c4d43f46147a\") " pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg"
Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.960089 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 26 22:54:25 crc kubenswrapper[4793]: I0126 22:54:25.101082 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 26 22:54:25 crc kubenswrapper[4793]: I0126 22:54:25.117896 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/98b1ff78-307c-4f20-8d26-021aac793b08-apiservice-cert\") pod \"metallb-operator-controller-manager-6947468bd9-d2zfx\" (UID: \"98b1ff78-307c-4f20-8d26-021aac793b08\") " pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx"
Jan 26 22:54:25 crc kubenswrapper[4793]: I0126 22:54:25.118090 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/98b1ff78-307c-4f20-8d26-021aac793b08-webhook-cert\") pod \"metallb-operator-controller-manager-6947468bd9-d2zfx\" (UID: \"98b1ff78-307c-4f20-8d26-021aac793b08\") " pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx"
Jan 26 22:54:25 crc kubenswrapper[4793]: E0126 22:54:25.320557 4793 projected.go:288] Couldn't get configMap metallb-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 26 22:54:25 crc kubenswrapper[4793]: E0126 22:54:25.320623 4793 projected.go:194] Error preparing data for projected volume kube-api-access-hwcg4 for pod metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx: failed to sync configmap cache: timed out waiting for the condition
Jan 26 22:54:25 crc kubenswrapper[4793]: E0126 22:54:25.320688 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98b1ff78-307c-4f20-8d26-021aac793b08-kube-api-access-hwcg4 podName:98b1ff78-307c-4f20-8d26-021aac793b08 nodeName:}" failed. No retries permitted until 2026-01-26 22:54:25.820664641 +0000 UTC m=+880.809436153 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-hwcg4" (UniqueName: "kubernetes.io/projected/98b1ff78-307c-4f20-8d26-021aac793b08-kube-api-access-hwcg4") pod "metallb-operator-controller-manager-6947468bd9-d2zfx" (UID: "98b1ff78-307c-4f20-8d26-021aac793b08") : failed to sync configmap cache: timed out waiting for the condition Jan 26 22:54:25 crc kubenswrapper[4793]: I0126 22:54:25.401341 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-z6pgj" Jan 26 22:54:25 crc kubenswrapper[4793]: E0126 22:54:25.420381 4793 projected.go:288] Couldn't get configMap metallb-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 26 22:54:25 crc kubenswrapper[4793]: E0126 22:54:25.420485 4793 projected.go:194] Error preparing data for projected volume kube-api-access-jzz58 for pod metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg: failed to sync configmap cache: timed out waiting for the condition Jan 26 22:54:25 crc kubenswrapper[4793]: E0126 22:54:25.420575 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4c33d800-1c21-459c-b488-c4d43f46147a-kube-api-access-jzz58 podName:4c33d800-1c21-459c-b488-c4d43f46147a nodeName:}" failed. No retries permitted until 2026-01-26 22:54:25.920539762 +0000 UTC m=+880.909311274 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jzz58" (UniqueName: "kubernetes.io/projected/4c33d800-1c21-459c-b488-c4d43f46147a-kube-api-access-jzz58") pod "metallb-operator-webhook-server-576dcf7dd-dz2zg" (UID: "4c33d800-1c21-459c-b488-c4d43f46147a") : failed to sync configmap cache: timed out waiting for the condition Jan 26 22:54:25 crc kubenswrapper[4793]: I0126 22:54:25.596804 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 26 22:54:25 crc kubenswrapper[4793]: I0126 22:54:25.821436 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwcg4\" (UniqueName: \"kubernetes.io/projected/98b1ff78-307c-4f20-8d26-021aac793b08-kube-api-access-hwcg4\") pod \"metallb-operator-controller-manager-6947468bd9-d2zfx\" (UID: \"98b1ff78-307c-4f20-8d26-021aac793b08\") " pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" Jan 26 22:54:25 crc kubenswrapper[4793]: I0126 22:54:25.831679 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwcg4\" (UniqueName: \"kubernetes.io/projected/98b1ff78-307c-4f20-8d26-021aac793b08-kube-api-access-hwcg4\") pod \"metallb-operator-controller-manager-6947468bd9-d2zfx\" (UID: \"98b1ff78-307c-4f20-8d26-021aac793b08\") " pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" Jan 26 22:54:25 crc kubenswrapper[4793]: I0126 22:54:25.922948 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzz58\" (UniqueName: \"kubernetes.io/projected/4c33d800-1c21-459c-b488-c4d43f46147a-kube-api-access-jzz58\") pod \"metallb-operator-webhook-server-576dcf7dd-dz2zg\" (UID: \"4c33d800-1c21-459c-b488-c4d43f46147a\") " pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" Jan 26 22:54:25 crc kubenswrapper[4793]: I0126 22:54:25.927862 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jzz58\" (UniqueName: \"kubernetes.io/projected/4c33d800-1c21-459c-b488-c4d43f46147a-kube-api-access-jzz58\") pod \"metallb-operator-webhook-server-576dcf7dd-dz2zg\" (UID: \"4c33d800-1c21-459c-b488-c4d43f46147a\") " pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" Jan 26 22:54:25 crc kubenswrapper[4793]: I0126 22:54:25.955085 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" Jan 26 22:54:26 crc kubenswrapper[4793]: I0126 22:54:26.127758 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" Jan 26 22:54:26 crc kubenswrapper[4793]: I0126 22:54:26.206362 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg"] Jan 26 22:54:26 crc kubenswrapper[4793]: I0126 22:54:26.343004 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx"] Jan 26 22:54:26 crc kubenswrapper[4793]: W0126 22:54:26.352587 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98b1ff78_307c_4f20_8d26_021aac793b08.slice/crio-23fa57770728960d38579663fdc509bb14b6b087e2967087a649a2782fb32c2b WatchSource:0}: Error finding container 23fa57770728960d38579663fdc509bb14b6b087e2967087a649a2782fb32c2b: Status 404 returned error can't find the container with id 23fa57770728960d38579663fdc509bb14b6b087e2967087a649a2782fb32c2b Jan 26 22:54:27 crc kubenswrapper[4793]: I0126 22:54:27.128081 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" event={"ID":"4c33d800-1c21-459c-b488-c4d43f46147a","Type":"ContainerStarted","Data":"c7a641f856d559af17a3246f0649213b4fd07a6172255821b6f3d13486bf0da4"} Jan 26 22:54:27 
crc kubenswrapper[4793]: I0126 22:54:27.129115 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" event={"ID":"98b1ff78-307c-4f20-8d26-021aac793b08","Type":"ContainerStarted","Data":"23fa57770728960d38579663fdc509bb14b6b087e2967087a649a2782fb32c2b"} Jan 26 22:54:31 crc kubenswrapper[4793]: I0126 22:54:31.162847 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" event={"ID":"98b1ff78-307c-4f20-8d26-021aac793b08","Type":"ContainerStarted","Data":"a14d3d26c2c79e3b15e2049f747606d0aa871fb9cb4c3e1901b17cea9b88abb8"} Jan 26 22:54:31 crc kubenswrapper[4793]: I0126 22:54:31.163669 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" Jan 26 22:54:31 crc kubenswrapper[4793]: I0126 22:54:31.184277 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" podStartSLOduration=4.422327552 podStartE2EDuration="8.184259819s" podCreationTimestamp="2026-01-26 22:54:23 +0000 UTC" firstStartedPulling="2026-01-26 22:54:26.355577584 +0000 UTC m=+881.344349096" lastFinishedPulling="2026-01-26 22:54:30.117509841 +0000 UTC m=+885.106281363" observedRunningTime="2026-01-26 22:54:31.18045131 +0000 UTC m=+886.169222822" watchObservedRunningTime="2026-01-26 22:54:31.184259819 +0000 UTC m=+886.173031331" Jan 26 22:54:34 crc kubenswrapper[4793]: I0126 22:54:34.184064 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" event={"ID":"4c33d800-1c21-459c-b488-c4d43f46147a","Type":"ContainerStarted","Data":"511add134364b50e2d89b5e328e41d302d115dbf33c09d14503e1db153c665c7"} Jan 26 22:54:34 crc kubenswrapper[4793]: I0126 22:54:34.184911 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" Jan 26 22:54:34 crc kubenswrapper[4793]: I0126 22:54:34.210405 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" podStartSLOduration=3.155309138 podStartE2EDuration="10.210378079s" podCreationTimestamp="2026-01-26 22:54:24 +0000 UTC" firstStartedPulling="2026-01-26 22:54:26.227466901 +0000 UTC m=+881.216238423" lastFinishedPulling="2026-01-26 22:54:33.282535852 +0000 UTC m=+888.271307364" observedRunningTime="2026-01-26 22:54:34.206752616 +0000 UTC m=+889.195524178" watchObservedRunningTime="2026-01-26 22:54:34.210378079 +0000 UTC m=+889.199149621" Jan 26 22:54:45 crc kubenswrapper[4793]: I0126 22:54:45.961029 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" Jan 26 22:55:06 crc kubenswrapper[4793]: I0126 22:55:06.134863 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.059548 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-66sdc"] Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.064133 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.065940 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.066304 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.068358 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr"] Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.069267 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.071128 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.079541 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-tm4ps" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.089757 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr"] Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.142800 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-chq2z"] Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.143763 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-chq2z" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.145744 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.146110 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-7qgrl" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.146431 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.146968 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.154758 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-hbsm9"] Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.155705 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-hbsm9" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.161870 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.167106 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-hbsm9"] Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.229412 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-frr-sockets\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.229487 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-metrics\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.229527 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-reloader\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.229562 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-metrics-certs\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.229583 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57471441-2abf-4120-8a8f-e7dffa12be10-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-z6qmr\" (UID: \"57471441-2abf-4120-8a8f-e7dffa12be10\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.229648 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6tdf\" (UniqueName: \"kubernetes.io/projected/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-kube-api-access-x6tdf\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.229669 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prjx5\" (UniqueName: \"kubernetes.io/projected/57471441-2abf-4120-8a8f-e7dffa12be10-kube-api-access-prjx5\") pod \"frr-k8s-webhook-server-7df86c4f6c-z6qmr\" (UID: \"57471441-2abf-4120-8a8f-e7dffa12be10\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.229761 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-frr-startup\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.229968 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-frr-conf\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 
22:55:07.330864 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c375207-c4cf-4638-af29-710918c39e54-metrics-certs\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.330915 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3c375207-c4cf-4638-af29-710918c39e54-memberlist\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.330951 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-metrics-certs\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.330973 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57471441-2abf-4120-8a8f-e7dffa12be10-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-z6qmr\" (UID: \"57471441-2abf-4120-8a8f-e7dffa12be10\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.331132 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6tdf\" (UniqueName: \"kubernetes.io/projected/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-kube-api-access-x6tdf\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.331228 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-prjx5\" (UniqueName: \"kubernetes.io/projected/57471441-2abf-4120-8a8f-e7dffa12be10-kube-api-access-prjx5\") pod \"frr-k8s-webhook-server-7df86c4f6c-z6qmr\" (UID: \"57471441-2abf-4120-8a8f-e7dffa12be10\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.331257 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-frr-startup\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.331331 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-frr-conf\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.331385 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b9487b8-aed6-466b-a801-8f826dc81687-metrics-certs\") pod \"controller-6968d8fdc4-hbsm9\" (UID: \"9b9487b8-aed6-466b-a801-8f826dc81687\") " pod="metallb-system/controller-6968d8fdc4-hbsm9" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.331493 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9xzt\" (UniqueName: \"kubernetes.io/projected/3c375207-c4cf-4638-af29-710918c39e54-kube-api-access-x9xzt\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.331577 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-frr-sockets\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.331644 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b9487b8-aed6-466b-a801-8f826dc81687-cert\") pod \"controller-6968d8fdc4-hbsm9\" (UID: \"9b9487b8-aed6-466b-a801-8f826dc81687\") " pod="metallb-system/controller-6968d8fdc4-hbsm9" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.331730 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-metrics\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.331747 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3c375207-c4cf-4638-af29-710918c39e54-metallb-excludel2\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.331805 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wpft\" (UniqueName: \"kubernetes.io/projected/9b9487b8-aed6-466b-a801-8f826dc81687-kube-api-access-8wpft\") pod \"controller-6968d8fdc4-hbsm9\" (UID: \"9b9487b8-aed6-466b-a801-8f826dc81687\") " pod="metallb-system/controller-6968d8fdc4-hbsm9" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.331857 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-reloader\") pod 
\"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.332721 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-reloader\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.333407 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-frr-conf\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.333468 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-frr-sockets\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.333556 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-metrics\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.334349 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-frr-startup\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.338926 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-metrics-certs\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.349358 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57471441-2abf-4120-8a8f-e7dffa12be10-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-z6qmr\" (UID: \"57471441-2abf-4120-8a8f-e7dffa12be10\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.354943 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prjx5\" (UniqueName: \"kubernetes.io/projected/57471441-2abf-4120-8a8f-e7dffa12be10-kube-api-access-prjx5\") pod \"frr-k8s-webhook-server-7df86c4f6c-z6qmr\" (UID: \"57471441-2abf-4120-8a8f-e7dffa12be10\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.356852 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6tdf\" (UniqueName: \"kubernetes.io/projected/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-kube-api-access-x6tdf\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.382935 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.394985 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.432849 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b9487b8-aed6-466b-a801-8f826dc81687-metrics-certs\") pod \"controller-6968d8fdc4-hbsm9\" (UID: \"9b9487b8-aed6-466b-a801-8f826dc81687\") " pod="metallb-system/controller-6968d8fdc4-hbsm9" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.432902 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9xzt\" (UniqueName: \"kubernetes.io/projected/3c375207-c4cf-4638-af29-710918c39e54-kube-api-access-x9xzt\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.432939 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b9487b8-aed6-466b-a801-8f826dc81687-cert\") pod \"controller-6968d8fdc4-hbsm9\" (UID: \"9b9487b8-aed6-466b-a801-8f826dc81687\") " pod="metallb-system/controller-6968d8fdc4-hbsm9" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.432968 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3c375207-c4cf-4638-af29-710918c39e54-metallb-excludel2\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.432989 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wpft\" (UniqueName: \"kubernetes.io/projected/9b9487b8-aed6-466b-a801-8f826dc81687-kube-api-access-8wpft\") pod \"controller-6968d8fdc4-hbsm9\" (UID: \"9b9487b8-aed6-466b-a801-8f826dc81687\") " pod="metallb-system/controller-6968d8fdc4-hbsm9" Jan 
26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.433020 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c375207-c4cf-4638-af29-710918c39e54-metrics-certs\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z"
Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.433037 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3c375207-c4cf-4638-af29-710918c39e54-memberlist\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z"
Jan 26 22:55:07 crc kubenswrapper[4793]: E0126 22:55:07.433160 4793 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 26 22:55:07 crc kubenswrapper[4793]: E0126 22:55:07.433242 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c375207-c4cf-4638-af29-710918c39e54-memberlist podName:3c375207-c4cf-4638-af29-710918c39e54 nodeName:}" failed. No retries permitted until 2026-01-26 22:55:07.933223349 +0000 UTC m=+922.921994861 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3c375207-c4cf-4638-af29-710918c39e54-memberlist") pod "speaker-chq2z" (UID: "3c375207-c4cf-4638-af29-710918c39e54") : secret "metallb-memberlist" not found
Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.434351 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3c375207-c4cf-4638-af29-710918c39e54-metallb-excludel2\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z"
Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.439884 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.441180 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c375207-c4cf-4638-af29-710918c39e54-metrics-certs\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z"
Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.442583 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b9487b8-aed6-466b-a801-8f826dc81687-metrics-certs\") pod \"controller-6968d8fdc4-hbsm9\" (UID: \"9b9487b8-aed6-466b-a801-8f826dc81687\") " pod="metallb-system/controller-6968d8fdc4-hbsm9"
Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.450806 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9xzt\" (UniqueName: \"kubernetes.io/projected/3c375207-c4cf-4638-af29-710918c39e54-kube-api-access-x9xzt\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z"
Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.451868 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b9487b8-aed6-466b-a801-8f826dc81687-cert\") pod \"controller-6968d8fdc4-hbsm9\" (UID: \"9b9487b8-aed6-466b-a801-8f826dc81687\") " pod="metallb-system/controller-6968d8fdc4-hbsm9"
Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.462290 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wpft\" (UniqueName: \"kubernetes.io/projected/9b9487b8-aed6-466b-a801-8f826dc81687-kube-api-access-8wpft\") pod \"controller-6968d8fdc4-hbsm9\" (UID: \"9b9487b8-aed6-466b-a801-8f826dc81687\") " pod="metallb-system/controller-6968d8fdc4-hbsm9"
Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.471467 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-hbsm9"
Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.797785 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-hbsm9"]
Jan 26 22:55:07 crc kubenswrapper[4793]: W0126 22:55:07.807688 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b9487b8_aed6_466b_a801_8f826dc81687.slice/crio-f7c43c7c720f874b1bd247d8ab56c7c5b534cb9b06fe57cfd8fdc027026491f1 WatchSource:0}: Error finding container f7c43c7c720f874b1bd247d8ab56c7c5b534cb9b06fe57cfd8fdc027026491f1: Status 404 returned error can't find the container with id f7c43c7c720f874b1bd247d8ab56c7c5b534cb9b06fe57cfd8fdc027026491f1
Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.897770 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr"]
Jan 26 22:55:07 crc kubenswrapper[4793]: W0126 22:55:07.902325 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57471441_2abf_4120_8a8f_e7dffa12be10.slice/crio-a087304cbe5e354364897ad0e0606d39348dd597f5ba32d9d02094984e4af571 WatchSource:0}: Error finding container a087304cbe5e354364897ad0e0606d39348dd597f5ba32d9d02094984e4af571: Status 404 returned error can't find the container with id a087304cbe5e354364897ad0e0606d39348dd597f5ba32d9d02094984e4af571
Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.942984 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3c375207-c4cf-4638-af29-710918c39e54-memberlist\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z"
Jan 26 22:55:07 crc kubenswrapper[4793]: E0126 22:55:07.943207 4793 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 26 22:55:07 crc kubenswrapper[4793]: E0126 22:55:07.943288 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c375207-c4cf-4638-af29-710918c39e54-memberlist podName:3c375207-c4cf-4638-af29-710918c39e54 nodeName:}" failed. No retries permitted until 2026-01-26 22:55:08.943265791 +0000 UTC m=+923.932037313 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3c375207-c4cf-4638-af29-710918c39e54-memberlist") pod "speaker-chq2z" (UID: "3c375207-c4cf-4638-af29-710918c39e54") : secret "metallb-memberlist" not found
Jan 26 22:55:08 crc kubenswrapper[4793]: I0126 22:55:08.430612 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr" event={"ID":"57471441-2abf-4120-8a8f-e7dffa12be10","Type":"ContainerStarted","Data":"a087304cbe5e354364897ad0e0606d39348dd597f5ba32d9d02094984e4af571"}
Jan 26 22:55:08 crc kubenswrapper[4793]: I0126 22:55:08.433181 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66sdc" event={"ID":"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8","Type":"ContainerStarted","Data":"c203c34fb97c8caea2a74dc05bc64f34665a3068b3f84d722f8a073712363c58"}
Jan 26 22:55:08 crc kubenswrapper[4793]: I0126 22:55:08.436452 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-hbsm9" event={"ID":"9b9487b8-aed6-466b-a801-8f826dc81687","Type":"ContainerStarted","Data":"f8d969089626ac4c9e032d86c0ea2a00fa5a16f4dbe7f72874e8c4b89c886e63"}
Jan 26 22:55:08 crc kubenswrapper[4793]: I0126 22:55:08.436517 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-hbsm9" event={"ID":"9b9487b8-aed6-466b-a801-8f826dc81687","Type":"ContainerStarted","Data":"f354d7690c9d354b7e9123181963f5cf1fb1be6800dbc95336d8262f7a03e466"}
Jan 26 22:55:08 crc kubenswrapper[4793]: I0126 22:55:08.436545 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-hbsm9" event={"ID":"9b9487b8-aed6-466b-a801-8f826dc81687","Type":"ContainerStarted","Data":"f7c43c7c720f874b1bd247d8ab56c7c5b534cb9b06fe57cfd8fdc027026491f1"}
Jan 26 22:55:08 crc kubenswrapper[4793]: I0126 22:55:08.436663 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-hbsm9"
Jan 26 22:55:08 crc kubenswrapper[4793]: I0126 22:55:08.452376 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-hbsm9" podStartSLOduration=1.4523536369999999 podStartE2EDuration="1.452353637s" podCreationTimestamp="2026-01-26 22:55:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:55:08.451615716 +0000 UTC m=+923.440387238" watchObservedRunningTime="2026-01-26 22:55:08.452353637 +0000 UTC m=+923.441125159"
Jan 26 22:55:08 crc kubenswrapper[4793]: I0126 22:55:08.956014 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3c375207-c4cf-4638-af29-710918c39e54-memberlist\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z"
Jan 26 22:55:08 crc kubenswrapper[4793]: I0126 22:55:08.962581 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3c375207-c4cf-4638-af29-710918c39e54-memberlist\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z"
Jan 26 22:55:09 crc kubenswrapper[4793]: I0126 22:55:09.258165 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-chq2z"
Jan 26 22:55:09 crc kubenswrapper[4793]: W0126 22:55:09.313480 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c375207_c4cf_4638_af29_710918c39e54.slice/crio-2500b9e9a7f56fcebfb1f2e7348cea4f1f727ae7e6dbd06d7752d2ffa132e22b WatchSource:0}: Error finding container 2500b9e9a7f56fcebfb1f2e7348cea4f1f727ae7e6dbd06d7752d2ffa132e22b: Status 404 returned error can't find the container with id 2500b9e9a7f56fcebfb1f2e7348cea4f1f727ae7e6dbd06d7752d2ffa132e22b
Jan 26 22:55:09 crc kubenswrapper[4793]: I0126 22:55:09.477825 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-chq2z" event={"ID":"3c375207-c4cf-4638-af29-710918c39e54","Type":"ContainerStarted","Data":"2500b9e9a7f56fcebfb1f2e7348cea4f1f727ae7e6dbd06d7752d2ffa132e22b"}
Jan 26 22:55:10 crc kubenswrapper[4793]: I0126 22:55:10.486600 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-chq2z" event={"ID":"3c375207-c4cf-4638-af29-710918c39e54","Type":"ContainerStarted","Data":"dd8fccd79f9b1e0f325bb5a61c902b34406cc6d0f42bf1a375b975aa30a561ab"}
Jan 26 22:55:10 crc kubenswrapper[4793]: I0126 22:55:10.487104 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-chq2z"
Jan 26 22:55:10 crc kubenswrapper[4793]: I0126 22:55:10.487117 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-chq2z" event={"ID":"3c375207-c4cf-4638-af29-710918c39e54","Type":"ContainerStarted","Data":"81e415b9c188e6844c1c28642eb3acaaca04c194f760ca755eb63e4f7d7463bf"}
Jan 26 22:55:10 crc kubenswrapper[4793]: I0126 22:55:10.509518 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-chq2z" podStartSLOduration=3.509493407 podStartE2EDuration="3.509493407s" podCreationTimestamp="2026-01-26 22:55:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:55:10.504579378 +0000 UTC m=+925.493350890" watchObservedRunningTime="2026-01-26 22:55:10.509493407 +0000 UTC m=+925.498264919"
Jan 26 22:55:17 crc kubenswrapper[4793]: I0126 22:55:17.475250 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-hbsm9"
Jan 26 22:55:18 crc kubenswrapper[4793]: I0126 22:55:18.323245 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 22:55:18 crc kubenswrapper[4793]: I0126 22:55:18.323778 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 22:55:18 crc kubenswrapper[4793]: I0126 22:55:18.535166 4793 generic.go:334] "Generic (PLEG): container finished" podID="24ee4fb9-196a-45d2-a9b6-d446b5d92dd8" containerID="30347001381e6f4d0ce19e9e3028904a7c222b19b777f9df39a5f9e2e0ee6bc3" exitCode=0
Jan 26 22:55:18 crc kubenswrapper[4793]: I0126 22:55:18.535316 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66sdc" event={"ID":"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8","Type":"ContainerDied","Data":"30347001381e6f4d0ce19e9e3028904a7c222b19b777f9df39a5f9e2e0ee6bc3"}
Jan 26 22:55:18 crc kubenswrapper[4793]: I0126 22:55:18.536981 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr" event={"ID":"57471441-2abf-4120-8a8f-e7dffa12be10","Type":"ContainerStarted","Data":"01dc9d52f6525f0367a67286e9fb1910511592b14dff48a1e7dec9573458e1f9"}
Jan 26 22:55:18 crc kubenswrapper[4793]: I0126 22:55:18.537171 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr"
Jan 26 22:55:18 crc kubenswrapper[4793]: I0126 22:55:18.617777 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr" podStartSLOduration=1.940486342 podStartE2EDuration="11.617747098s" podCreationTimestamp="2026-01-26 22:55:07 +0000 UTC" firstStartedPulling="2026-01-26 22:55:07.905330601 +0000 UTC m=+922.894102113" lastFinishedPulling="2026-01-26 22:55:17.582591357 +0000 UTC m=+932.571362869" observedRunningTime="2026-01-26 22:55:18.611015448 +0000 UTC m=+933.599786970" watchObservedRunningTime="2026-01-26 22:55:18.617747098 +0000 UTC m=+933.606518620"
Jan 26 22:55:19 crc kubenswrapper[4793]: I0126 22:55:19.263029 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-chq2z"
Jan 26 22:55:19 crc kubenswrapper[4793]: I0126 22:55:19.545436 4793 generic.go:334] "Generic (PLEG): container finished" podID="24ee4fb9-196a-45d2-a9b6-d446b5d92dd8" containerID="658052373ef6c202aa458c2b673c10303a40492d4046364e497ca2660f07ac44" exitCode=0
Jan 26 22:55:19 crc kubenswrapper[4793]: I0126 22:55:19.545525 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66sdc" event={"ID":"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8","Type":"ContainerDied","Data":"658052373ef6c202aa458c2b673c10303a40492d4046364e497ca2660f07ac44"}
Jan 26 22:55:20 crc kubenswrapper[4793]: I0126 22:55:20.564061 4793 generic.go:334] "Generic (PLEG): container finished" podID="24ee4fb9-196a-45d2-a9b6-d446b5d92dd8" containerID="6615e117ae9fb89bec524a5e42dbcc955c0f5fd4e61e3b8f857a1423d7f43179" exitCode=0
Jan 26 22:55:20 crc kubenswrapper[4793]: I0126 22:55:20.564283 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66sdc" event={"ID":"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8","Type":"ContainerDied","Data":"6615e117ae9fb89bec524a5e42dbcc955c0f5fd4e61e3b8f857a1423d7f43179"}
Jan 26 22:55:20 crc kubenswrapper[4793]: I0126 22:55:20.790581 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"]
Jan 26 22:55:20 crc kubenswrapper[4793]: I0126 22:55:20.796468 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"
Jan 26 22:55:20 crc kubenswrapper[4793]: I0126 22:55:20.799134 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"]
Jan 26 22:55:20 crc kubenswrapper[4793]: I0126 22:55:20.860868 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 26 22:55:20 crc kubenswrapper[4793]: I0126 22:55:20.966893 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be420288-3c8c-46cd-8058-b091fc8a7eec-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8\" (UID: \"be420288-3c8c-46cd-8058-b091fc8a7eec\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"
Jan 26 22:55:20 crc kubenswrapper[4793]: I0126 22:55:20.966947 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be420288-3c8c-46cd-8058-b091fc8a7eec-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8\" (UID: \"be420288-3c8c-46cd-8058-b091fc8a7eec\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"
Jan 26 22:55:20 crc kubenswrapper[4793]: I0126 22:55:20.966977 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f4rk\" (UniqueName: \"kubernetes.io/projected/be420288-3c8c-46cd-8058-b091fc8a7eec-kube-api-access-2f4rk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8\" (UID: \"be420288-3c8c-46cd-8058-b091fc8a7eec\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.067971 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be420288-3c8c-46cd-8058-b091fc8a7eec-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8\" (UID: \"be420288-3c8c-46cd-8058-b091fc8a7eec\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.068047 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f4rk\" (UniqueName: \"kubernetes.io/projected/be420288-3c8c-46cd-8058-b091fc8a7eec-kube-api-access-2f4rk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8\" (UID: \"be420288-3c8c-46cd-8058-b091fc8a7eec\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.068139 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be420288-3c8c-46cd-8058-b091fc8a7eec-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8\" (UID: \"be420288-3c8c-46cd-8058-b091fc8a7eec\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.068625 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be420288-3c8c-46cd-8058-b091fc8a7eec-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8\" (UID: \"be420288-3c8c-46cd-8058-b091fc8a7eec\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.068640 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be420288-3c8c-46cd-8058-b091fc8a7eec-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8\" (UID: \"be420288-3c8c-46cd-8058-b091fc8a7eec\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.090164 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f4rk\" (UniqueName: \"kubernetes.io/projected/be420288-3c8c-46cd-8058-b091fc8a7eec-kube-api-access-2f4rk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8\" (UID: \"be420288-3c8c-46cd-8058-b091fc8a7eec\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.188483 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.450893 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"]
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.573895 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8" event={"ID":"be420288-3c8c-46cd-8058-b091fc8a7eec","Type":"ContainerStarted","Data":"087f60b89e9c3bf70f0db40bd188ccc41371960bd28b5be9bc6386af1bd78af4"}
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.578383 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66sdc" event={"ID":"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8","Type":"ContainerStarted","Data":"cf512aba77433d1ce19bc40a93d0513776d7cde9b6768a99265f18a702fa64fc"}
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.578403 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66sdc" event={"ID":"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8","Type":"ContainerStarted","Data":"91c5f343a37d9143145167ec75f34335e9dc5af94a83acc6fe25585a718e0a25"}
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.578413 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66sdc" event={"ID":"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8","Type":"ContainerStarted","Data":"dd18a37f7f0065635cf9a3e4a4a2bf7b544cdf78daa2b25dff2f32412b98ba74"}
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.578421 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66sdc" event={"ID":"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8","Type":"ContainerStarted","Data":"5c516c417742fa5e3c802567021e583ceab1bdf2fe2397620381c370731deaa5"}
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.578428 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66sdc" event={"ID":"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8","Type":"ContainerStarted","Data":"0ef7ca1796042d463cd64d5bfbe2b5b3f4dd2ce937fec9c53b4ace7f60bf5089"}
Jan 26 22:55:22 crc kubenswrapper[4793]: I0126 22:55:22.590635 4793 generic.go:334] "Generic (PLEG): container finished" podID="be420288-3c8c-46cd-8058-b091fc8a7eec" containerID="4e2248746c05716dd7e7cc4274ca2edfd95c254887c340f14588c7df186c2edf" exitCode=0
Jan 26 22:55:22 crc kubenswrapper[4793]: I0126 22:55:22.590754 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8" event={"ID":"be420288-3c8c-46cd-8058-b091fc8a7eec","Type":"ContainerDied","Data":"4e2248746c05716dd7e7cc4274ca2edfd95c254887c340f14588c7df186c2edf"}
Jan 26 22:55:22 crc kubenswrapper[4793]: I0126 22:55:22.601266 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66sdc" event={"ID":"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8","Type":"ContainerStarted","Data":"41a5e52579fbf9b4fcb0318abba4aed9b0c5bd39ba97f909229531cacb3922b3"}
Jan 26 22:55:22 crc kubenswrapper[4793]: I0126 22:55:22.601692 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-66sdc"
Jan 26 22:55:22 crc kubenswrapper[4793]: I0126 22:55:22.658527 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-66sdc" podStartSLOduration=5.656503413 podStartE2EDuration="15.658497929s" podCreationTimestamp="2026-01-26 22:55:07 +0000 UTC" firstStartedPulling="2026-01-26 22:55:07.589843452 +0000 UTC m=+922.578614964" lastFinishedPulling="2026-01-26 22:55:17.591837968 +0000 UTC m=+932.580609480" observedRunningTime="2026-01-26 22:55:22.657745208 +0000 UTC m=+937.646516720" watchObservedRunningTime="2026-01-26 22:55:22.658497929 +0000 UTC m=+937.647269441"
Jan 26 22:55:24 crc kubenswrapper[4793]: I0126 22:55:24.889967 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k7chh"]
Jan 26 22:55:24 crc kubenswrapper[4793]: I0126 22:55:24.893718 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k7chh"
Jan 26 22:55:24 crc kubenswrapper[4793]: I0126 22:55:24.911484 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k7chh"]
Jan 26 22:55:24 crc kubenswrapper[4793]: I0126 22:55:24.936069 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489fdb41-1d60-4db7-a2de-a3829fc71c0c-utilities\") pod \"community-operators-k7chh\" (UID: \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\") " pod="openshift-marketplace/community-operators-k7chh"
Jan 26 22:55:24 crc kubenswrapper[4793]: I0126 22:55:24.936175 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvmrp\" (UniqueName: \"kubernetes.io/projected/489fdb41-1d60-4db7-a2de-a3829fc71c0c-kube-api-access-zvmrp\") pod \"community-operators-k7chh\" (UID: \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\") " pod="openshift-marketplace/community-operators-k7chh"
Jan 26 22:55:24 crc kubenswrapper[4793]: I0126 22:55:24.936258 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489fdb41-1d60-4db7-a2de-a3829fc71c0c-catalog-content\") pod \"community-operators-k7chh\" (UID: \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\") " pod="openshift-marketplace/community-operators-k7chh"
Jan 26 22:55:25 crc kubenswrapper[4793]: I0126 22:55:25.038001 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489fdb41-1d60-4db7-a2de-a3829fc71c0c-catalog-content\") pod \"community-operators-k7chh\" (UID: \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\") " pod="openshift-marketplace/community-operators-k7chh"
Jan 26 22:55:25 crc kubenswrapper[4793]: I0126 22:55:25.038065 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489fdb41-1d60-4db7-a2de-a3829fc71c0c-utilities\") pod \"community-operators-k7chh\" (UID: \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\") " pod="openshift-marketplace/community-operators-k7chh"
Jan 26 22:55:25 crc kubenswrapper[4793]: I0126 22:55:25.038118 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvmrp\" (UniqueName: \"kubernetes.io/projected/489fdb41-1d60-4db7-a2de-a3829fc71c0c-kube-api-access-zvmrp\") pod \"community-operators-k7chh\" (UID: \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\") " pod="openshift-marketplace/community-operators-k7chh"
Jan 26 22:55:25 crc kubenswrapper[4793]: I0126 22:55:25.039584 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489fdb41-1d60-4db7-a2de-a3829fc71c0c-catalog-content\") pod \"community-operators-k7chh\" (UID: \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\") " pod="openshift-marketplace/community-operators-k7chh"
Jan 26 22:55:25 crc kubenswrapper[4793]: I0126 22:55:25.039646 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489fdb41-1d60-4db7-a2de-a3829fc71c0c-utilities\") pod \"community-operators-k7chh\" (UID: \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\") " pod="openshift-marketplace/community-operators-k7chh"
Jan 26 22:55:25 crc kubenswrapper[4793]: I0126 22:55:25.072041 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvmrp\" (UniqueName: \"kubernetes.io/projected/489fdb41-1d60-4db7-a2de-a3829fc71c0c-kube-api-access-zvmrp\") pod \"community-operators-k7chh\" (UID: \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\") " pod="openshift-marketplace/community-operators-k7chh"
Jan 26 22:55:25 crc kubenswrapper[4793]: I0126 22:55:25.232337 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k7chh"
Jan 26 22:55:26 crc kubenswrapper[4793]: I0126 22:55:26.232558 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k7chh"]
Jan 26 22:55:26 crc kubenswrapper[4793]: I0126 22:55:26.645031 4793 generic.go:334] "Generic (PLEG): container finished" podID="489fdb41-1d60-4db7-a2de-a3829fc71c0c" containerID="f180818f9da9a3ea1fa913a362222c8a8d7909c3db4aab4a331adb7d66ccf614" exitCode=0
Jan 26 22:55:26 crc kubenswrapper[4793]: I0126 22:55:26.645113 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7chh" event={"ID":"489fdb41-1d60-4db7-a2de-a3829fc71c0c","Type":"ContainerDied","Data":"f180818f9da9a3ea1fa913a362222c8a8d7909c3db4aab4a331adb7d66ccf614"}
Jan 26 22:55:26 crc kubenswrapper[4793]: I0126 22:55:26.645470 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7chh" event={"ID":"489fdb41-1d60-4db7-a2de-a3829fc71c0c","Type":"ContainerStarted","Data":"89b8de0329ef0472c524d45732bc1f1e1b3e7868092ebdc86e0346a811451edc"}
Jan 26 22:55:26 crc kubenswrapper[4793]: I0126 22:55:26.650009 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8" event={"ID":"be420288-3c8c-46cd-8058-b091fc8a7eec","Type":"ContainerDied","Data":"3e77a6cea9fe8ca317e94455a6257644cb6b9204630b3f994cc667f96c47bc19"}
Jan 26 22:55:26 crc kubenswrapper[4793]: I0126 22:55:26.649952 4793 generic.go:334] "Generic (PLEG): container finished" podID="be420288-3c8c-46cd-8058-b091fc8a7eec" containerID="3e77a6cea9fe8ca317e94455a6257644cb6b9204630b3f994cc667f96c47bc19" exitCode=0
Jan 26 22:55:27 crc kubenswrapper[4793]: I0126 22:55:27.384094 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-66sdc"
Jan 26 22:55:27 crc kubenswrapper[4793]: I0126 22:55:27.403397 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr"
Jan 26 22:55:27 crc kubenswrapper[4793]: I0126 22:55:27.456659 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-66sdc"
Jan 26 22:55:27 crc kubenswrapper[4793]: I0126 22:55:27.657828 4793 generic.go:334] "Generic (PLEG): container finished" podID="be420288-3c8c-46cd-8058-b091fc8a7eec" containerID="7b570c88f8123b212e277e16f5633cc71221bbbf5f2e843301d4ffb90ae96723" exitCode=0
Jan 26 22:55:27 crc kubenswrapper[4793]: I0126 22:55:27.657940 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8" event={"ID":"be420288-3c8c-46cd-8058-b091fc8a7eec","Type":"ContainerDied","Data":"7b570c88f8123b212e277e16f5633cc71221bbbf5f2e843301d4ffb90ae96723"}
Jan 26 22:55:28 crc kubenswrapper[4793]: I0126 22:55:28.668645 4793 generic.go:334] "Generic (PLEG): container finished" podID="489fdb41-1d60-4db7-a2de-a3829fc71c0c" containerID="c71a16f468753e6b6592c9c3c29cfd096037e2a3a1698a31b7f14e70bfe4f910" exitCode=0
Jan 26 22:55:28 crc kubenswrapper[4793]: I0126 22:55:28.668765 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7chh" event={"ID":"489fdb41-1d60-4db7-a2de-a3829fc71c0c","Type":"ContainerDied","Data":"c71a16f468753e6b6592c9c3c29cfd096037e2a3a1698a31b7f14e70bfe4f910"}
Jan 26 22:55:28 crc kubenswrapper[4793]: I0126 22:55:28.978727 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"
Jan 26 22:55:28 crc kubenswrapper[4793]: I0126 22:55:28.999874 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be420288-3c8c-46cd-8058-b091fc8a7eec-util\") pod \"be420288-3c8c-46cd-8058-b091fc8a7eec\" (UID: \"be420288-3c8c-46cd-8058-b091fc8a7eec\") "
Jan 26 22:55:28 crc kubenswrapper[4793]: I0126 22:55:28.999970 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f4rk\" (UniqueName: \"kubernetes.io/projected/be420288-3c8c-46cd-8058-b091fc8a7eec-kube-api-access-2f4rk\") pod \"be420288-3c8c-46cd-8058-b091fc8a7eec\" (UID: \"be420288-3c8c-46cd-8058-b091fc8a7eec\") "
Jan 26 22:55:28 crc kubenswrapper[4793]: I0126 22:55:28.999994 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be420288-3c8c-46cd-8058-b091fc8a7eec-bundle\") pod \"be420288-3c8c-46cd-8058-b091fc8a7eec\" (UID: \"be420288-3c8c-46cd-8058-b091fc8a7eec\") "
Jan 26 22:55:28 crc kubenswrapper[4793]: I0126 22:55:29.000827 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be420288-3c8c-46cd-8058-b091fc8a7eec-bundle" (OuterVolumeSpecName: "bundle") pod "be420288-3c8c-46cd-8058-b091fc8a7eec" (UID: "be420288-3c8c-46cd-8058-b091fc8a7eec"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 22:55:29 crc kubenswrapper[4793]: I0126 22:55:29.006646 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be420288-3c8c-46cd-8058-b091fc8a7eec-kube-api-access-2f4rk" (OuterVolumeSpecName: "kube-api-access-2f4rk") pod "be420288-3c8c-46cd-8058-b091fc8a7eec" (UID: "be420288-3c8c-46cd-8058-b091fc8a7eec"). InnerVolumeSpecName "kube-api-access-2f4rk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 22:55:29 crc kubenswrapper[4793]: I0126 22:55:29.014103 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be420288-3c8c-46cd-8058-b091fc8a7eec-util" (OuterVolumeSpecName: "util") pod "be420288-3c8c-46cd-8058-b091fc8a7eec" (UID: "be420288-3c8c-46cd-8058-b091fc8a7eec"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 22:55:29 crc kubenswrapper[4793]: I0126 22:55:29.104058 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f4rk\" (UniqueName: \"kubernetes.io/projected/be420288-3c8c-46cd-8058-b091fc8a7eec-kube-api-access-2f4rk\") on node \"crc\" DevicePath \"\""
Jan 26 22:55:29 crc kubenswrapper[4793]: I0126 22:55:29.104106 4793 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be420288-3c8c-46cd-8058-b091fc8a7eec-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 22:55:29 crc kubenswrapper[4793]: I0126 22:55:29.104118 4793 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be420288-3c8c-46cd-8058-b091fc8a7eec-util\") on node \"crc\" DevicePath \"\""
Jan 26 22:55:29 crc kubenswrapper[4793]: I0126 22:55:29.677918 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"
Jan 26 22:55:29 crc kubenswrapper[4793]: I0126 22:55:29.678037 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8" event={"ID":"be420288-3c8c-46cd-8058-b091fc8a7eec","Type":"ContainerDied","Data":"087f60b89e9c3bf70f0db40bd188ccc41371960bd28b5be9bc6386af1bd78af4"}
Jan 26 22:55:29 crc kubenswrapper[4793]: I0126 22:55:29.678591 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="087f60b89e9c3bf70f0db40bd188ccc41371960bd28b5be9bc6386af1bd78af4"
Jan 26 22:55:29 crc kubenswrapper[4793]: I0126 22:55:29.680616 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7chh" event={"ID":"489fdb41-1d60-4db7-a2de-a3829fc71c0c","Type":"ContainerStarted","Data":"49a0589138e43f276e0f073a6e8e9c9374aa56fb29ac6d7cf8aec2466adfc178"}
Jan 26 22:55:29 crc kubenswrapper[4793]: I0126 22:55:29.703346 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k7chh" podStartSLOduration=3.030906137 podStartE2EDuration="5.703326181s" podCreationTimestamp="2026-01-26 22:55:24 +0000 UTC" firstStartedPulling="2026-01-26 22:55:26.647074179 +0000 UTC m=+941.635845691" lastFinishedPulling="2026-01-26 22:55:29.319494213 +0000 UTC m=+944.308265735" observedRunningTime="2026-01-26 22:55:29.700130101 +0000 UTC m=+944.688901653" watchObservedRunningTime="2026-01-26 22:55:29.703326181 +0000 UTC m=+944.692097703"
Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.353002 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9"]
Jan 26 22:55:33 crc kubenswrapper[4793]: E0126 22:55:33.354769 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be420288-3c8c-46cd-8058-b091fc8a7eec" containerName="pull"
Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.354793 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="be420288-3c8c-46cd-8058-b091fc8a7eec" containerName="pull"
Jan 26 22:55:33 crc kubenswrapper[4793]: E0126 22:55:33.354818 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be420288-3c8c-46cd-8058-b091fc8a7eec" containerName="util"
Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.354825 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="be420288-3c8c-46cd-8058-b091fc8a7eec" containerName="util"
Jan 26 22:55:33 crc kubenswrapper[4793]: E0126 22:55:33.354838 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be420288-3c8c-46cd-8058-b091fc8a7eec" containerName="extract"
Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.354845 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="be420288-3c8c-46cd-8058-b091fc8a7eec" containerName="extract"
Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.355081 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="be420288-3c8c-46cd-8058-b091fc8a7eec" containerName="extract"
Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.355641 4793 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.359477 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.359522 4793 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-n54sv" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.361236 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.369824 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9"] Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.455746 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ac9b05a4-dacb-41bf-ade5-d28fdc391d9b-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-p4jp9\" (UID: \"ac9b05a4-dacb-41bf-ade5-d28fdc391d9b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.455836 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg6hh\" (UniqueName: \"kubernetes.io/projected/ac9b05a4-dacb-41bf-ade5-d28fdc391d9b-kube-api-access-fg6hh\") pod \"cert-manager-operator-controller-manager-64cf6dff88-p4jp9\" (UID: \"ac9b05a4-dacb-41bf-ade5-d28fdc391d9b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.557028 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/ac9b05a4-dacb-41bf-ade5-d28fdc391d9b-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-p4jp9\" (UID: \"ac9b05a4-dacb-41bf-ade5-d28fdc391d9b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.557796 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg6hh\" (UniqueName: \"kubernetes.io/projected/ac9b05a4-dacb-41bf-ade5-d28fdc391d9b-kube-api-access-fg6hh\") pod \"cert-manager-operator-controller-manager-64cf6dff88-p4jp9\" (UID: \"ac9b05a4-dacb-41bf-ade5-d28fdc391d9b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.557723 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ac9b05a4-dacb-41bf-ade5-d28fdc391d9b-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-p4jp9\" (UID: \"ac9b05a4-dacb-41bf-ade5-d28fdc391d9b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.586790 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg6hh\" (UniqueName: \"kubernetes.io/projected/ac9b05a4-dacb-41bf-ade5-d28fdc391d9b-kube-api-access-fg6hh\") pod \"cert-manager-operator-controller-manager-64cf6dff88-p4jp9\" (UID: \"ac9b05a4-dacb-41bf-ade5-d28fdc391d9b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.723036 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.948375 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9"] Jan 26 22:55:33 crc kubenswrapper[4793]: W0126 22:55:33.967313 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac9b05a4_dacb_41bf_ade5_d28fdc391d9b.slice/crio-c02ffd83a947299bf1aeebe202507d111fa8754aec1b25cbd167eeb946ea29df WatchSource:0}: Error finding container c02ffd83a947299bf1aeebe202507d111fa8754aec1b25cbd167eeb946ea29df: Status 404 returned error can't find the container with id c02ffd83a947299bf1aeebe202507d111fa8754aec1b25cbd167eeb946ea29df Jan 26 22:55:34 crc kubenswrapper[4793]: I0126 22:55:34.717284 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9" event={"ID":"ac9b05a4-dacb-41bf-ade5-d28fdc391d9b","Type":"ContainerStarted","Data":"c02ffd83a947299bf1aeebe202507d111fa8754aec1b25cbd167eeb946ea29df"} Jan 26 22:55:35 crc kubenswrapper[4793]: I0126 22:55:35.232776 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k7chh" Jan 26 22:55:35 crc kubenswrapper[4793]: I0126 22:55:35.233444 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k7chh" Jan 26 22:55:35 crc kubenswrapper[4793]: I0126 22:55:35.297648 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k7chh" Jan 26 22:55:35 crc kubenswrapper[4793]: I0126 22:55:35.768604 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k7chh" Jan 26 22:55:37 crc 
kubenswrapper[4793]: I0126 22:55:37.385315 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:37 crc kubenswrapper[4793]: I0126 22:55:37.676427 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k7chh"] Jan 26 22:55:38 crc kubenswrapper[4793]: I0126 22:55:38.749470 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k7chh" podUID="489fdb41-1d60-4db7-a2de-a3829fc71c0c" containerName="registry-server" containerID="cri-o://49a0589138e43f276e0f073a6e8e9c9374aa56fb29ac6d7cf8aec2466adfc178" gracePeriod=2 Jan 26 22:55:40 crc kubenswrapper[4793]: I0126 22:55:40.763468 4793 generic.go:334] "Generic (PLEG): container finished" podID="489fdb41-1d60-4db7-a2de-a3829fc71c0c" containerID="49a0589138e43f276e0f073a6e8e9c9374aa56fb29ac6d7cf8aec2466adfc178" exitCode=0 Jan 26 22:55:40 crc kubenswrapper[4793]: I0126 22:55:40.763504 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7chh" event={"ID":"489fdb41-1d60-4db7-a2de-a3829fc71c0c","Type":"ContainerDied","Data":"49a0589138e43f276e0f073a6e8e9c9374aa56fb29ac6d7cf8aec2466adfc178"} Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.655064 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k7chh" Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.770948 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7chh" event={"ID":"489fdb41-1d60-4db7-a2de-a3829fc71c0c","Type":"ContainerDied","Data":"89b8de0329ef0472c524d45732bc1f1e1b3e7868092ebdc86e0346a811451edc"} Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.771005 4793 scope.go:117] "RemoveContainer" containerID="49a0589138e43f276e0f073a6e8e9c9374aa56fb29ac6d7cf8aec2466adfc178" Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.771154 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k7chh" Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.774245 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9" event={"ID":"ac9b05a4-dacb-41bf-ade5-d28fdc391d9b","Type":"ContainerStarted","Data":"8bb3993b16a13ea0ba1ab168af5aeed083b0fa0fc9c2902fe9734ff217b7a2e9"} Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.778438 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489fdb41-1d60-4db7-a2de-a3829fc71c0c-utilities\") pod \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\" (UID: \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\") " Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.778497 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvmrp\" (UniqueName: \"kubernetes.io/projected/489fdb41-1d60-4db7-a2de-a3829fc71c0c-kube-api-access-zvmrp\") pod \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\" (UID: \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\") " Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.778662 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489fdb41-1d60-4db7-a2de-a3829fc71c0c-catalog-content\") pod \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\" (UID: \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\") " Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.779972 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/489fdb41-1d60-4db7-a2de-a3829fc71c0c-utilities" (OuterVolumeSpecName: "utilities") pod "489fdb41-1d60-4db7-a2de-a3829fc71c0c" (UID: "489fdb41-1d60-4db7-a2de-a3829fc71c0c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.790507 4793 scope.go:117] "RemoveContainer" containerID="c71a16f468753e6b6592c9c3c29cfd096037e2a3a1698a31b7f14e70bfe4f910" Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.801557 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/489fdb41-1d60-4db7-a2de-a3829fc71c0c-kube-api-access-zvmrp" (OuterVolumeSpecName: "kube-api-access-zvmrp") pod "489fdb41-1d60-4db7-a2de-a3829fc71c0c" (UID: "489fdb41-1d60-4db7-a2de-a3829fc71c0c"). InnerVolumeSpecName "kube-api-access-zvmrp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.802586 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9" podStartSLOduration=1.3009408599999999 podStartE2EDuration="8.80256543s" podCreationTimestamp="2026-01-26 22:55:33 +0000 UTC" firstStartedPulling="2026-01-26 22:55:33.96914471 +0000 UTC m=+948.957916222" lastFinishedPulling="2026-01-26 22:55:41.47076927 +0000 UTC m=+956.459540792" observedRunningTime="2026-01-26 22:55:41.800504322 +0000 UTC m=+956.789275844" watchObservedRunningTime="2026-01-26 22:55:41.80256543 +0000 UTC m=+956.791336962" Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.832635 4793 scope.go:117] "RemoveContainer" containerID="f180818f9da9a3ea1fa913a362222c8a8d7909c3db4aab4a331adb7d66ccf614" Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.855318 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/489fdb41-1d60-4db7-a2de-a3829fc71c0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "489fdb41-1d60-4db7-a2de-a3829fc71c0c" (UID: "489fdb41-1d60-4db7-a2de-a3829fc71c0c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.880596 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489fdb41-1d60-4db7-a2de-a3829fc71c0c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.880626 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489fdb41-1d60-4db7-a2de-a3829fc71c0c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.880638 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvmrp\" (UniqueName: \"kubernetes.io/projected/489fdb41-1d60-4db7-a2de-a3829fc71c0c-kube-api-access-zvmrp\") on node \"crc\" DevicePath \"\"" Jan 26 22:55:42 crc kubenswrapper[4793]: I0126 22:55:42.120933 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k7chh"] Jan 26 22:55:42 crc kubenswrapper[4793]: I0126 22:55:42.125379 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k7chh"] Jan 26 22:55:43 crc kubenswrapper[4793]: I0126 22:55:43.769298 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="489fdb41-1d60-4db7-a2de-a3829fc71c0c" path="/var/lib/kubelet/pods/489fdb41-1d60-4db7-a2de-a3829fc71c0c/volumes" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.484036 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jdpc8"] Jan 26 22:55:44 crc kubenswrapper[4793]: E0126 22:55:44.484624 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489fdb41-1d60-4db7-a2de-a3829fc71c0c" containerName="registry-server" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.484644 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="489fdb41-1d60-4db7-a2de-a3829fc71c0c" 
containerName="registry-server" Jan 26 22:55:44 crc kubenswrapper[4793]: E0126 22:55:44.484655 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489fdb41-1d60-4db7-a2de-a3829fc71c0c" containerName="extract-content" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.484662 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="489fdb41-1d60-4db7-a2de-a3829fc71c0c" containerName="extract-content" Jan 26 22:55:44 crc kubenswrapper[4793]: E0126 22:55:44.484675 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489fdb41-1d60-4db7-a2de-a3829fc71c0c" containerName="extract-utilities" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.484681 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="489fdb41-1d60-4db7-a2de-a3829fc71c0c" containerName="extract-utilities" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.484785 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="489fdb41-1d60-4db7-a2de-a3829fc71c0c" containerName="registry-server" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.485570 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.506997 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdpc8"] Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.617605 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4rx6\" (UniqueName: \"kubernetes.io/projected/e13b3efd-5989-4b1b-a165-5bb471b1072c-kube-api-access-p4rx6\") pod \"redhat-marketplace-jdpc8\" (UID: \"e13b3efd-5989-4b1b-a165-5bb471b1072c\") " pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.617672 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13b3efd-5989-4b1b-a165-5bb471b1072c-catalog-content\") pod \"redhat-marketplace-jdpc8\" (UID: \"e13b3efd-5989-4b1b-a165-5bb471b1072c\") " pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.617726 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e13b3efd-5989-4b1b-a165-5bb471b1072c-utilities\") pod \"redhat-marketplace-jdpc8\" (UID: \"e13b3efd-5989-4b1b-a165-5bb471b1072c\") " pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.719400 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e13b3efd-5989-4b1b-a165-5bb471b1072c-utilities\") pod \"redhat-marketplace-jdpc8\" (UID: \"e13b3efd-5989-4b1b-a165-5bb471b1072c\") " pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.719520 4793 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-p4rx6\" (UniqueName: \"kubernetes.io/projected/e13b3efd-5989-4b1b-a165-5bb471b1072c-kube-api-access-p4rx6\") pod \"redhat-marketplace-jdpc8\" (UID: \"e13b3efd-5989-4b1b-a165-5bb471b1072c\") " pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.719548 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13b3efd-5989-4b1b-a165-5bb471b1072c-catalog-content\") pod \"redhat-marketplace-jdpc8\" (UID: \"e13b3efd-5989-4b1b-a165-5bb471b1072c\") " pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.720076 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13b3efd-5989-4b1b-a165-5bb471b1072c-catalog-content\") pod \"redhat-marketplace-jdpc8\" (UID: \"e13b3efd-5989-4b1b-a165-5bb471b1072c\") " pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.720458 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e13b3efd-5989-4b1b-a165-5bb471b1072c-utilities\") pod \"redhat-marketplace-jdpc8\" (UID: \"e13b3efd-5989-4b1b-a165-5bb471b1072c\") " pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.742407 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4rx6\" (UniqueName: \"kubernetes.io/projected/e13b3efd-5989-4b1b-a165-5bb471b1072c-kube-api-access-p4rx6\") pod \"redhat-marketplace-jdpc8\" (UID: \"e13b3efd-5989-4b1b-a165-5bb471b1072c\") " pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.804353 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.280402 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdpc8"] Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.423100 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-hdzqz"] Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.424164 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.425896 4793 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-96qfn" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.428581 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.428880 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.428971 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6lqp\" (UniqueName: \"kubernetes.io/projected/07849a03-e9ab-4c1f-9157-cef15ba05e6b-kube-api-access-p6lqp\") pod \"cert-manager-webhook-f4fb5df64-hdzqz\" (UID: \"07849a03-e9ab-4c1f-9157-cef15ba05e6b\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.429097 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/07849a03-e9ab-4c1f-9157-cef15ba05e6b-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-hdzqz\" (UID: \"07849a03-e9ab-4c1f-9157-cef15ba05e6b\") " 
pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.438487 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-hdzqz"] Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.530160 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/07849a03-e9ab-4c1f-9157-cef15ba05e6b-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-hdzqz\" (UID: \"07849a03-e9ab-4c1f-9157-cef15ba05e6b\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.530744 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6lqp\" (UniqueName: \"kubernetes.io/projected/07849a03-e9ab-4c1f-9157-cef15ba05e6b-kube-api-access-p6lqp\") pod \"cert-manager-webhook-f4fb5df64-hdzqz\" (UID: \"07849a03-e9ab-4c1f-9157-cef15ba05e6b\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.550462 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6lqp\" (UniqueName: \"kubernetes.io/projected/07849a03-e9ab-4c1f-9157-cef15ba05e6b-kube-api-access-p6lqp\") pod \"cert-manager-webhook-f4fb5df64-hdzqz\" (UID: \"07849a03-e9ab-4c1f-9157-cef15ba05e6b\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.551707 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/07849a03-e9ab-4c1f-9157-cef15ba05e6b-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-hdzqz\" (UID: \"07849a03-e9ab-4c1f-9157-cef15ba05e6b\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.741802 4793 reflector.go:368] Caches populated for *v1.Secret from 
object-"cert-manager"/"cert-manager-webhook-dockercfg-96qfn" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.751073 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.826380 4793 generic.go:334] "Generic (PLEG): container finished" podID="e13b3efd-5989-4b1b-a165-5bb471b1072c" containerID="b62d2910349e90dab8ff7e1efde3f449c8b1c39782b9151e3bb0cc91c81a2845" exitCode=0 Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.826459 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdpc8" event={"ID":"e13b3efd-5989-4b1b-a165-5bb471b1072c","Type":"ContainerDied","Data":"b62d2910349e90dab8ff7e1efde3f449c8b1c39782b9151e3bb0cc91c81a2845"} Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.826534 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdpc8" event={"ID":"e13b3efd-5989-4b1b-a165-5bb471b1072c","Type":"ContainerStarted","Data":"1016701f831451a8c8076fd22bf204f8d806deb0be35568f4235399f90ca0410"} Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.942980 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-hdzqz"] Jan 26 22:55:46 crc kubenswrapper[4793]: I0126 22:55:46.836808 4793 generic.go:334] "Generic (PLEG): container finished" podID="e13b3efd-5989-4b1b-a165-5bb471b1072c" containerID="0658eb9b35d127fe16a9703c0e1b463e2ddee208e2f7f661030392adaf014732" exitCode=0 Jan 26 22:55:46 crc kubenswrapper[4793]: I0126 22:55:46.836920 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdpc8" event={"ID":"e13b3efd-5989-4b1b-a165-5bb471b1072c","Type":"ContainerDied","Data":"0658eb9b35d127fe16a9703c0e1b463e2ddee208e2f7f661030392adaf014732"} Jan 26 22:55:46 crc kubenswrapper[4793]: I0126 22:55:46.838865 4793 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" event={"ID":"07849a03-e9ab-4c1f-9157-cef15ba05e6b","Type":"ContainerStarted","Data":"f645b952011f28614efc2846c80acbfbd0b054285e1b20da6a86aee83dfa31a7"} Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.208308 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr"] Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.209482 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr" Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.215657 4793 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-fj7kv" Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.230554 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr"] Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.278359 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-bn9fr\" (UID: \"e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr" Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.278417 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckxss\" (UniqueName: \"kubernetes.io/projected/e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d-kube-api-access-ckxss\") pod \"cert-manager-cainjector-855d9ccff4-bn9fr\" (UID: \"e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr" Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.322915 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.322987 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.379925 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-bn9fr\" (UID: \"e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr" Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.379977 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckxss\" (UniqueName: \"kubernetes.io/projected/e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d-kube-api-access-ckxss\") pod \"cert-manager-cainjector-855d9ccff4-bn9fr\" (UID: \"e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr" Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.398041 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-bn9fr\" (UID: \"e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr" Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.398667 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-ckxss\" (UniqueName: \"kubernetes.io/projected/e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d-kube-api-access-ckxss\") pod \"cert-manager-cainjector-855d9ccff4-bn9fr\" (UID: \"e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr" Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.527839 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr" Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.865683 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdpc8" event={"ID":"e13b3efd-5989-4b1b-a165-5bb471b1072c","Type":"ContainerStarted","Data":"a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0"} Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.890441 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jdpc8" podStartSLOduration=2.3857182 podStartE2EDuration="4.890415587s" podCreationTimestamp="2026-01-26 22:55:44 +0000 UTC" firstStartedPulling="2026-01-26 22:55:45.828452043 +0000 UTC m=+960.817223555" lastFinishedPulling="2026-01-26 22:55:48.33314943 +0000 UTC m=+963.321920942" observedRunningTime="2026-01-26 22:55:48.889054218 +0000 UTC m=+963.877825730" watchObservedRunningTime="2026-01-26 22:55:48.890415587 +0000 UTC m=+963.879187099" Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.935510 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr"] Jan 26 22:55:48 crc kubenswrapper[4793]: W0126 22:55:48.945220 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7a7f0bf_2977_4ed6_8925_8eb8171a8e1d.slice/crio-c343c9f32092023214dbddaedd6289208a329bdef0296de6229d7fbd1c6d442b WatchSource:0}: Error finding container 
c343c9f32092023214dbddaedd6289208a329bdef0296de6229d7fbd1c6d442b: Status 404 returned error can't find the container with id c343c9f32092023214dbddaedd6289208a329bdef0296de6229d7fbd1c6d442b Jan 26 22:55:49 crc kubenswrapper[4793]: I0126 22:55:49.874284 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr" event={"ID":"e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d","Type":"ContainerStarted","Data":"c343c9f32092023214dbddaedd6289208a329bdef0296de6229d7fbd1c6d442b"} Jan 26 22:55:54 crc kubenswrapper[4793]: I0126 22:55:54.805325 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:54 crc kubenswrapper[4793]: I0126 22:55:54.805730 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:54 crc kubenswrapper[4793]: I0126 22:55:54.871643 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:54 crc kubenswrapper[4793]: I0126 22:55:54.968984 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:55 crc kubenswrapper[4793]: I0126 22:55:55.933787 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" event={"ID":"07849a03-e9ab-4c1f-9157-cef15ba05e6b","Type":"ContainerStarted","Data":"14f8735381b7c3ddb190b3640f28ebbfa2e06990c1529cfad31897c54b520b02"} Jan 26 22:55:55 crc kubenswrapper[4793]: I0126 22:55:55.933869 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" Jan 26 22:55:55 crc kubenswrapper[4793]: I0126 22:55:55.937022 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr" 
event={"ID":"e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d","Type":"ContainerStarted","Data":"aa6a53fef38b20535c55cb0b66ff94f2caa3f80be60ab2dfe5e2f5499fd35aff"} Jan 26 22:55:55 crc kubenswrapper[4793]: I0126 22:55:55.967041 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" podStartSLOduration=2.017948573 podStartE2EDuration="10.967021466s" podCreationTimestamp="2026-01-26 22:55:45 +0000 UTC" firstStartedPulling="2026-01-26 22:55:45.952074434 +0000 UTC m=+960.940845946" lastFinishedPulling="2026-01-26 22:55:54.901147327 +0000 UTC m=+969.889918839" observedRunningTime="2026-01-26 22:55:55.963547138 +0000 UTC m=+970.952318650" watchObservedRunningTime="2026-01-26 22:55:55.967021466 +0000 UTC m=+970.955792978" Jan 26 22:55:55 crc kubenswrapper[4793]: I0126 22:55:55.987152 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr" podStartSLOduration=2.031990063 podStartE2EDuration="7.987128524s" podCreationTimestamp="2026-01-26 22:55:48 +0000 UTC" firstStartedPulling="2026-01-26 22:55:48.947843178 +0000 UTC m=+963.936614690" lastFinishedPulling="2026-01-26 22:55:54.902981639 +0000 UTC m=+969.891753151" observedRunningTime="2026-01-26 22:55:55.982282157 +0000 UTC m=+970.971053669" watchObservedRunningTime="2026-01-26 22:55:55.987128524 +0000 UTC m=+970.975900036" Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.274413 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdpc8"] Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.274667 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jdpc8" podUID="e13b3efd-5989-4b1b-a165-5bb471b1072c" containerName="registry-server" containerID="cri-o://a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0" gracePeriod=2 Jan 26 22:55:58 crc 
kubenswrapper[4793]: I0126 22:55:58.767291 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.938254 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e13b3efd-5989-4b1b-a165-5bb471b1072c-utilities\") pod \"e13b3efd-5989-4b1b-a165-5bb471b1072c\" (UID: \"e13b3efd-5989-4b1b-a165-5bb471b1072c\") " Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.939535 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13b3efd-5989-4b1b-a165-5bb471b1072c-catalog-content\") pod \"e13b3efd-5989-4b1b-a165-5bb471b1072c\" (UID: \"e13b3efd-5989-4b1b-a165-5bb471b1072c\") " Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.941220 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e13b3efd-5989-4b1b-a165-5bb471b1072c-utilities" (OuterVolumeSpecName: "utilities") pod "e13b3efd-5989-4b1b-a165-5bb471b1072c" (UID: "e13b3efd-5989-4b1b-a165-5bb471b1072c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.951557 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4rx6\" (UniqueName: \"kubernetes.io/projected/e13b3efd-5989-4b1b-a165-5bb471b1072c-kube-api-access-p4rx6\") pod \"e13b3efd-5989-4b1b-a165-5bb471b1072c\" (UID: \"e13b3efd-5989-4b1b-a165-5bb471b1072c\") " Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.953315 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e13b3efd-5989-4b1b-a165-5bb471b1072c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.965671 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e13b3efd-5989-4b1b-a165-5bb471b1072c-kube-api-access-p4rx6" (OuterVolumeSpecName: "kube-api-access-p4rx6") pod "e13b3efd-5989-4b1b-a165-5bb471b1072c" (UID: "e13b3efd-5989-4b1b-a165-5bb471b1072c"). InnerVolumeSpecName "kube-api-access-p4rx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.966984 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e13b3efd-5989-4b1b-a165-5bb471b1072c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e13b3efd-5989-4b1b-a165-5bb471b1072c" (UID: "e13b3efd-5989-4b1b-a165-5bb471b1072c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.973619 4793 generic.go:334] "Generic (PLEG): container finished" podID="e13b3efd-5989-4b1b-a165-5bb471b1072c" containerID="a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0" exitCode=0 Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.973746 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdpc8" event={"ID":"e13b3efd-5989-4b1b-a165-5bb471b1072c","Type":"ContainerDied","Data":"a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0"} Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.973814 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.973883 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdpc8" event={"ID":"e13b3efd-5989-4b1b-a165-5bb471b1072c","Type":"ContainerDied","Data":"1016701f831451a8c8076fd22bf204f8d806deb0be35568f4235399f90ca0410"} Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.973962 4793 scope.go:117] "RemoveContainer" containerID="a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0" Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.003986 4793 scope.go:117] "RemoveContainer" containerID="0658eb9b35d127fe16a9703c0e1b463e2ddee208e2f7f661030392adaf014732" Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.021170 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdpc8"] Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.033966 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdpc8"] Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.043218 4793 scope.go:117] "RemoveContainer" 
containerID="b62d2910349e90dab8ff7e1efde3f449c8b1c39782b9151e3bb0cc91c81a2845" Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.055121 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4rx6\" (UniqueName: \"kubernetes.io/projected/e13b3efd-5989-4b1b-a165-5bb471b1072c-kube-api-access-p4rx6\") on node \"crc\" DevicePath \"\"" Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.055164 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13b3efd-5989-4b1b-a165-5bb471b1072c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.062604 4793 scope.go:117] "RemoveContainer" containerID="a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0" Jan 26 22:55:59 crc kubenswrapper[4793]: E0126 22:55:59.063076 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0\": container with ID starting with a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0 not found: ID does not exist" containerID="a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0" Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.063123 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0"} err="failed to get container status \"a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0\": rpc error: code = NotFound desc = could not find container \"a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0\": container with ID starting with a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0 not found: ID does not exist" Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.063159 4793 scope.go:117] "RemoveContainer" 
containerID="0658eb9b35d127fe16a9703c0e1b463e2ddee208e2f7f661030392adaf014732" Jan 26 22:55:59 crc kubenswrapper[4793]: E0126 22:55:59.063721 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0658eb9b35d127fe16a9703c0e1b463e2ddee208e2f7f661030392adaf014732\": container with ID starting with 0658eb9b35d127fe16a9703c0e1b463e2ddee208e2f7f661030392adaf014732 not found: ID does not exist" containerID="0658eb9b35d127fe16a9703c0e1b463e2ddee208e2f7f661030392adaf014732" Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.063748 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0658eb9b35d127fe16a9703c0e1b463e2ddee208e2f7f661030392adaf014732"} err="failed to get container status \"0658eb9b35d127fe16a9703c0e1b463e2ddee208e2f7f661030392adaf014732\": rpc error: code = NotFound desc = could not find container \"0658eb9b35d127fe16a9703c0e1b463e2ddee208e2f7f661030392adaf014732\": container with ID starting with 0658eb9b35d127fe16a9703c0e1b463e2ddee208e2f7f661030392adaf014732 not found: ID does not exist" Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.063770 4793 scope.go:117] "RemoveContainer" containerID="b62d2910349e90dab8ff7e1efde3f449c8b1c39782b9151e3bb0cc91c81a2845" Jan 26 22:55:59 crc kubenswrapper[4793]: E0126 22:55:59.064120 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b62d2910349e90dab8ff7e1efde3f449c8b1c39782b9151e3bb0cc91c81a2845\": container with ID starting with b62d2910349e90dab8ff7e1efde3f449c8b1c39782b9151e3bb0cc91c81a2845 not found: ID does not exist" containerID="b62d2910349e90dab8ff7e1efde3f449c8b1c39782b9151e3bb0cc91c81a2845" Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.064164 4793 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b62d2910349e90dab8ff7e1efde3f449c8b1c39782b9151e3bb0cc91c81a2845"} err="failed to get container status \"b62d2910349e90dab8ff7e1efde3f449c8b1c39782b9151e3bb0cc91c81a2845\": rpc error: code = NotFound desc = could not find container \"b62d2910349e90dab8ff7e1efde3f449c8b1c39782b9151e3bb0cc91c81a2845\": container with ID starting with b62d2910349e90dab8ff7e1efde3f449c8b1c39782b9151e3bb0cc91c81a2845 not found: ID does not exist" Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.772750 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e13b3efd-5989-4b1b-a165-5bb471b1072c" path="/var/lib/kubelet/pods/e13b3efd-5989-4b1b-a165-5bb471b1072c/volumes" Jan 26 22:56:00 crc kubenswrapper[4793]: I0126 22:56:00.756998 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.536924 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-9gdgz"] Jan 26 22:56:04 crc kubenswrapper[4793]: E0126 22:56:04.537623 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e13b3efd-5989-4b1b-a165-5bb471b1072c" containerName="extract-utilities" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.537641 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13b3efd-5989-4b1b-a165-5bb471b1072c" containerName="extract-utilities" Jan 26 22:56:04 crc kubenswrapper[4793]: E0126 22:56:04.537657 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e13b3efd-5989-4b1b-a165-5bb471b1072c" containerName="extract-content" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.537665 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13b3efd-5989-4b1b-a165-5bb471b1072c" containerName="extract-content" Jan 26 22:56:04 crc kubenswrapper[4793]: E0126 22:56:04.537682 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e13b3efd-5989-4b1b-a165-5bb471b1072c" containerName="registry-server" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.537689 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13b3efd-5989-4b1b-a165-5bb471b1072c" containerName="registry-server" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.537814 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e13b3efd-5989-4b1b-a165-5bb471b1072c" containerName="registry-server" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.538296 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-9gdgz" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.541167 4793 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-56kh4" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.546569 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-9gdgz"] Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.556562 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqpbd\" (UniqueName: \"kubernetes.io/projected/e2b3e9b2-2f1c-415c-88e8-65168a6d0f37-kube-api-access-sqpbd\") pod \"cert-manager-86cb77c54b-9gdgz\" (UID: \"e2b3e9b2-2f1c-415c-88e8-65168a6d0f37\") " pod="cert-manager/cert-manager-86cb77c54b-9gdgz" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.556683 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e2b3e9b2-2f1c-415c-88e8-65168a6d0f37-bound-sa-token\") pod \"cert-manager-86cb77c54b-9gdgz\" (UID: \"e2b3e9b2-2f1c-415c-88e8-65168a6d0f37\") " pod="cert-manager/cert-manager-86cb77c54b-9gdgz" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.657094 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e2b3e9b2-2f1c-415c-88e8-65168a6d0f37-bound-sa-token\") pod \"cert-manager-86cb77c54b-9gdgz\" (UID: \"e2b3e9b2-2f1c-415c-88e8-65168a6d0f37\") " pod="cert-manager/cert-manager-86cb77c54b-9gdgz" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.657482 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqpbd\" (UniqueName: \"kubernetes.io/projected/e2b3e9b2-2f1c-415c-88e8-65168a6d0f37-kube-api-access-sqpbd\") pod \"cert-manager-86cb77c54b-9gdgz\" (UID: \"e2b3e9b2-2f1c-415c-88e8-65168a6d0f37\") " pod="cert-manager/cert-manager-86cb77c54b-9gdgz" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.676068 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e2b3e9b2-2f1c-415c-88e8-65168a6d0f37-bound-sa-token\") pod \"cert-manager-86cb77c54b-9gdgz\" (UID: \"e2b3e9b2-2f1c-415c-88e8-65168a6d0f37\") " pod="cert-manager/cert-manager-86cb77c54b-9gdgz" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.682013 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqpbd\" (UniqueName: \"kubernetes.io/projected/e2b3e9b2-2f1c-415c-88e8-65168a6d0f37-kube-api-access-sqpbd\") pod \"cert-manager-86cb77c54b-9gdgz\" (UID: \"e2b3e9b2-2f1c-415c-88e8-65168a6d0f37\") " pod="cert-manager/cert-manager-86cb77c54b-9gdgz" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.886580 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-9gdgz" Jan 26 22:56:05 crc kubenswrapper[4793]: I0126 22:56:05.124772 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-9gdgz"] Jan 26 22:56:06 crc kubenswrapper[4793]: I0126 22:56:06.036450 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-9gdgz" event={"ID":"e2b3e9b2-2f1c-415c-88e8-65168a6d0f37","Type":"ContainerStarted","Data":"f5ba0f2c15e6402fe13b2c85e4a396b70035d4818a596b077e3ce6aedaf8a296"} Jan 26 22:56:07 crc kubenswrapper[4793]: I0126 22:56:07.047100 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-9gdgz" event={"ID":"e2b3e9b2-2f1c-415c-88e8-65168a6d0f37","Type":"ContainerStarted","Data":"b91c96405b7710da1c629eb43fb767d0760fc199571ce4ce7b16417b602770a1"} Jan 26 22:56:07 crc kubenswrapper[4793]: I0126 22:56:07.074100 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-9gdgz" podStartSLOduration=3.074064446 podStartE2EDuration="3.074064446s" podCreationTimestamp="2026-01-26 22:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:56:07.069283261 +0000 UTC m=+982.058054813" watchObservedRunningTime="2026-01-26 22:56:07.074064446 +0000 UTC m=+982.062835998" Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.322509 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.323390 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" 
podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.323481 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.324544 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2270771b37172879cefcac364640536fda7d04596bd8eec13cf97ac1bcd6539c"} pod="openshift-machine-config-operator/machine-config-daemon-5htjl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.324634 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" containerID="cri-o://2270771b37172879cefcac364640536fda7d04596bd8eec13cf97ac1bcd6539c" gracePeriod=600 Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.688098 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cvp27"] Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.689353 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cvp27" Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.691099 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tmdb\" (UniqueName: \"kubernetes.io/projected/52221c9f-0f2d-4dca-8a2b-92199673793a-kube-api-access-8tmdb\") pod \"openstack-operator-index-cvp27\" (UID: \"52221c9f-0f2d-4dca-8a2b-92199673793a\") " pod="openstack-operators/openstack-operator-index-cvp27" Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.691601 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.692732 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-kwftg" Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.695618 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cvp27"] Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.711756 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.792706 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tmdb\" (UniqueName: \"kubernetes.io/projected/52221c9f-0f2d-4dca-8a2b-92199673793a-kube-api-access-8tmdb\") pod \"openstack-operator-index-cvp27\" (UID: \"52221c9f-0f2d-4dca-8a2b-92199673793a\") " pod="openstack-operators/openstack-operator-index-cvp27" Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.817227 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tmdb\" (UniqueName: \"kubernetes.io/projected/52221c9f-0f2d-4dca-8a2b-92199673793a-kube-api-access-8tmdb\") pod \"openstack-operator-index-cvp27\" (UID: 
\"52221c9f-0f2d-4dca-8a2b-92199673793a\") " pod="openstack-operators/openstack-operator-index-cvp27" Jan 26 22:56:19 crc kubenswrapper[4793]: I0126 22:56:19.014231 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cvp27" Jan 26 22:56:19 crc kubenswrapper[4793]: I0126 22:56:19.142623 4793 generic.go:334] "Generic (PLEG): container finished" podID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerID="2270771b37172879cefcac364640536fda7d04596bd8eec13cf97ac1bcd6539c" exitCode=0 Jan 26 22:56:19 crc kubenswrapper[4793]: I0126 22:56:19.142835 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerDied","Data":"2270771b37172879cefcac364640536fda7d04596bd8eec13cf97ac1bcd6539c"} Jan 26 22:56:19 crc kubenswrapper[4793]: I0126 22:56:19.143180 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerStarted","Data":"0f0bcae8737d5ff963297bee9670c968431e1a6c097e0d397cb380eea5515587"} Jan 26 22:56:19 crc kubenswrapper[4793]: I0126 22:56:19.143263 4793 scope.go:117] "RemoveContainer" containerID="f579552814843c2cef89c29637b851bbb824f6610b4162a130c2d08c83775a37" Jan 26 22:56:19 crc kubenswrapper[4793]: I0126 22:56:19.255092 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cvp27"] Jan 26 22:56:19 crc kubenswrapper[4793]: W0126 22:56:19.258557 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52221c9f_0f2d_4dca_8a2b_92199673793a.slice/crio-da94b4d032f6418c433292d7c4a31a15fc5dbe734a3926d2791523b0d192d961 WatchSource:0}: Error finding container da94b4d032f6418c433292d7c4a31a15fc5dbe734a3926d2791523b0d192d961: 
Status 404 returned error can't find the container with id da94b4d032f6418c433292d7c4a31a15fc5dbe734a3926d2791523b0d192d961 Jan 26 22:56:20 crc kubenswrapper[4793]: I0126 22:56:20.158975 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cvp27" event={"ID":"52221c9f-0f2d-4dca-8a2b-92199673793a","Type":"ContainerStarted","Data":"da94b4d032f6418c433292d7c4a31a15fc5dbe734a3926d2791523b0d192d961"} Jan 26 22:56:22 crc kubenswrapper[4793]: I0126 22:56:22.182709 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cvp27" event={"ID":"52221c9f-0f2d-4dca-8a2b-92199673793a","Type":"ContainerStarted","Data":"b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050"} Jan 26 22:56:22 crc kubenswrapper[4793]: I0126 22:56:22.208681 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cvp27" podStartSLOduration=2.079958187 podStartE2EDuration="4.208655767s" podCreationTimestamp="2026-01-26 22:56:18 +0000 UTC" firstStartedPulling="2026-01-26 22:56:19.260677052 +0000 UTC m=+994.249448564" lastFinishedPulling="2026-01-26 22:56:21.389374632 +0000 UTC m=+996.378146144" observedRunningTime="2026-01-26 22:56:22.205076556 +0000 UTC m=+997.193848098" watchObservedRunningTime="2026-01-26 22:56:22.208655767 +0000 UTC m=+997.197427309" Jan 26 22:56:24 crc kubenswrapper[4793]: I0126 22:56:24.082311 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-cvp27"] Jan 26 22:56:24 crc kubenswrapper[4793]: I0126 22:56:24.199913 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-cvp27" podUID="52221c9f-0f2d-4dca-8a2b-92199673793a" containerName="registry-server" containerID="cri-o://b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050" gracePeriod=2 Jan 26 22:56:24 crc 
kubenswrapper[4793]: I0126 22:56:24.679754 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-j9sn2"] Jan 26 22:56:24 crc kubenswrapper[4793]: I0126 22:56:24.681578 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-j9sn2" Jan 26 22:56:24 crc kubenswrapper[4793]: I0126 22:56:24.731498 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctf5k\" (UniqueName: \"kubernetes.io/projected/b2f6b1be-cc0d-47e0-8375-704e63bf5a2b-kube-api-access-ctf5k\") pod \"openstack-operator-index-j9sn2\" (UID: \"b2f6b1be-cc0d-47e0-8375-704e63bf5a2b\") " pod="openstack-operators/openstack-operator-index-j9sn2" Jan 26 22:56:24 crc kubenswrapper[4793]: I0126 22:56:24.738827 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-j9sn2"] Jan 26 22:56:24 crc kubenswrapper[4793]: I0126 22:56:24.740954 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cvp27" Jan 26 22:56:24 crc kubenswrapper[4793]: I0126 22:56:24.833037 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tmdb\" (UniqueName: \"kubernetes.io/projected/52221c9f-0f2d-4dca-8a2b-92199673793a-kube-api-access-8tmdb\") pod \"52221c9f-0f2d-4dca-8a2b-92199673793a\" (UID: \"52221c9f-0f2d-4dca-8a2b-92199673793a\") " Jan 26 22:56:24 crc kubenswrapper[4793]: I0126 22:56:24.833347 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctf5k\" (UniqueName: \"kubernetes.io/projected/b2f6b1be-cc0d-47e0-8375-704e63bf5a2b-kube-api-access-ctf5k\") pod \"openstack-operator-index-j9sn2\" (UID: \"b2f6b1be-cc0d-47e0-8375-704e63bf5a2b\") " pod="openstack-operators/openstack-operator-index-j9sn2" Jan 26 22:56:24 crc kubenswrapper[4793]: I0126 22:56:24.840355 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52221c9f-0f2d-4dca-8a2b-92199673793a-kube-api-access-8tmdb" (OuterVolumeSpecName: "kube-api-access-8tmdb") pod "52221c9f-0f2d-4dca-8a2b-92199673793a" (UID: "52221c9f-0f2d-4dca-8a2b-92199673793a"). InnerVolumeSpecName "kube-api-access-8tmdb". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 22:56:24 crc kubenswrapper[4793]: I0126 22:56:24.855015 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctf5k\" (UniqueName: \"kubernetes.io/projected/b2f6b1be-cc0d-47e0-8375-704e63bf5a2b-kube-api-access-ctf5k\") pod \"openstack-operator-index-j9sn2\" (UID: \"b2f6b1be-cc0d-47e0-8375-704e63bf5a2b\") " pod="openstack-operators/openstack-operator-index-j9sn2"
Jan 26 22:56:24 crc kubenswrapper[4793]: I0126 22:56:24.935251 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tmdb\" (UniqueName: \"kubernetes.io/projected/52221c9f-0f2d-4dca-8a2b-92199673793a-kube-api-access-8tmdb\") on node \"crc\" DevicePath \"\""
Jan 26 22:56:25 crc kubenswrapper[4793]: I0126 22:56:25.056653 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-j9sn2"
Jan 26 22:56:25 crc kubenswrapper[4793]: I0126 22:56:25.211575 4793 generic.go:334] "Generic (PLEG): container finished" podID="52221c9f-0f2d-4dca-8a2b-92199673793a" containerID="b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050" exitCode=0
Jan 26 22:56:25 crc kubenswrapper[4793]: I0126 22:56:25.211640 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cvp27"
Jan 26 22:56:25 crc kubenswrapper[4793]: I0126 22:56:25.211674 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cvp27" event={"ID":"52221c9f-0f2d-4dca-8a2b-92199673793a","Type":"ContainerDied","Data":"b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050"}
Jan 26 22:56:25 crc kubenswrapper[4793]: I0126 22:56:25.211794 4793 scope.go:117] "RemoveContainer" containerID="b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050"
Jan 26 22:56:25 crc kubenswrapper[4793]: I0126 22:56:25.211772 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cvp27" event={"ID":"52221c9f-0f2d-4dca-8a2b-92199673793a","Type":"ContainerDied","Data":"da94b4d032f6418c433292d7c4a31a15fc5dbe734a3926d2791523b0d192d961"}
Jan 26 22:56:25 crc kubenswrapper[4793]: I0126 22:56:25.268596 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-cvp27"]
Jan 26 22:56:25 crc kubenswrapper[4793]: I0126 22:56:25.269273 4793 scope.go:117] "RemoveContainer" containerID="b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050"
Jan 26 22:56:25 crc kubenswrapper[4793]: E0126 22:56:25.270102 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050\": container with ID starting with b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050 not found: ID does not exist" containerID="b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050"
Jan 26 22:56:25 crc kubenswrapper[4793]: I0126 22:56:25.270155 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050"} err="failed to get container status \"b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050\": rpc error: code = NotFound desc = could not find container \"b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050\": container with ID starting with b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050 not found: ID does not exist"
Jan 26 22:56:25 crc kubenswrapper[4793]: I0126 22:56:25.280790 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-cvp27"]
Jan 26 22:56:25 crc kubenswrapper[4793]: I0126 22:56:25.545756 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-j9sn2"]
Jan 26 22:56:25 crc kubenswrapper[4793]: W0126 22:56:25.550939 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2f6b1be_cc0d_47e0_8375_704e63bf5a2b.slice/crio-4b40c03b3b8d3943c90d1be4e0bbbdb57940ef5727dc2b8cccefb384d71c3655 WatchSource:0}: Error finding container 4b40c03b3b8d3943c90d1be4e0bbbdb57940ef5727dc2b8cccefb384d71c3655: Status 404 returned error can't find the container with id 4b40c03b3b8d3943c90d1be4e0bbbdb57940ef5727dc2b8cccefb384d71c3655
Jan 26 22:56:25 crc kubenswrapper[4793]: I0126 22:56:25.769973 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52221c9f-0f2d-4dca-8a2b-92199673793a" path="/var/lib/kubelet/pods/52221c9f-0f2d-4dca-8a2b-92199673793a/volumes"
Jan 26 22:56:26 crc kubenswrapper[4793]: I0126 22:56:26.221844 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-j9sn2" event={"ID":"b2f6b1be-cc0d-47e0-8375-704e63bf5a2b","Type":"ContainerStarted","Data":"4b40c03b3b8d3943c90d1be4e0bbbdb57940ef5727dc2b8cccefb384d71c3655"}
Jan 26 22:56:27 crc kubenswrapper[4793]: I0126 22:56:27.233811 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-j9sn2" event={"ID":"b2f6b1be-cc0d-47e0-8375-704e63bf5a2b","Type":"ContainerStarted","Data":"a4b586543ce07e0ad7433ddd442b27859c07e41fa8d29adaeb1b73bdffeae014"}
Jan 26 22:56:27 crc kubenswrapper[4793]: I0126 22:56:27.256795 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-j9sn2" podStartSLOduration=2.452412491 podStartE2EDuration="3.256772675s" podCreationTimestamp="2026-01-26 22:56:24 +0000 UTC" firstStartedPulling="2026-01-26 22:56:25.554347972 +0000 UTC m=+1000.543119484" lastFinishedPulling="2026-01-26 22:56:26.358708156 +0000 UTC m=+1001.347479668" observedRunningTime="2026-01-26 22:56:27.252165685 +0000 UTC m=+1002.240937237" watchObservedRunningTime="2026-01-26 22:56:27.256772675 +0000 UTC m=+1002.245544187"
Jan 26 22:56:35 crc kubenswrapper[4793]: I0126 22:56:35.056975 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-j9sn2"
Jan 26 22:56:35 crc kubenswrapper[4793]: I0126 22:56:35.058026 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-j9sn2"
Jan 26 22:56:35 crc kubenswrapper[4793]: I0126 22:56:35.096059 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-j9sn2"
Jan 26 22:56:35 crc kubenswrapper[4793]: I0126 22:56:35.331050 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-j9sn2"
Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.153682 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf"]
Jan 26 22:56:37 crc kubenswrapper[4793]: E0126 22:56:37.154822 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52221c9f-0f2d-4dca-8a2b-92199673793a" containerName="registry-server"
Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.154855 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="52221c9f-0f2d-4dca-8a2b-92199673793a" containerName="registry-server"
Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.155122 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="52221c9f-0f2d-4dca-8a2b-92199673793a" containerName="registry-server"
Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.157079 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf"
Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.161751 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-97km8"
Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.169294 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf"]
Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.332753 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fcc64fc2-d867-4a08-8df7-0464de3c133e-util\") pod \"9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf\" (UID: \"fcc64fc2-d867-4a08-8df7-0464de3c133e\") " pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf"
Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.332794 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vjrr\" (UniqueName: \"kubernetes.io/projected/fcc64fc2-d867-4a08-8df7-0464de3c133e-kube-api-access-4vjrr\") pod \"9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf\" (UID: \"fcc64fc2-d867-4a08-8df7-0464de3c133e\") " pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf"
Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.332855 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fcc64fc2-d867-4a08-8df7-0464de3c133e-bundle\") pod \"9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf\" (UID: \"fcc64fc2-d867-4a08-8df7-0464de3c133e\") " pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf"
Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.434172 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fcc64fc2-d867-4a08-8df7-0464de3c133e-util\") pod \"9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf\" (UID: \"fcc64fc2-d867-4a08-8df7-0464de3c133e\") " pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf"
Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.434232 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vjrr\" (UniqueName: \"kubernetes.io/projected/fcc64fc2-d867-4a08-8df7-0464de3c133e-kube-api-access-4vjrr\") pod \"9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf\" (UID: \"fcc64fc2-d867-4a08-8df7-0464de3c133e\") " pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf"
Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.434286 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fcc64fc2-d867-4a08-8df7-0464de3c133e-bundle\") pod \"9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf\" (UID: \"fcc64fc2-d867-4a08-8df7-0464de3c133e\") " pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf"
Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.434795 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fcc64fc2-d867-4a08-8df7-0464de3c133e-util\") pod \"9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf\" (UID: \"fcc64fc2-d867-4a08-8df7-0464de3c133e\") " pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf"
Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.434849 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fcc64fc2-d867-4a08-8df7-0464de3c133e-bundle\") pod \"9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf\" (UID: \"fcc64fc2-d867-4a08-8df7-0464de3c133e\") " pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf"
Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.457644 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vjrr\" (UniqueName: \"kubernetes.io/projected/fcc64fc2-d867-4a08-8df7-0464de3c133e-kube-api-access-4vjrr\") pod \"9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf\" (UID: \"fcc64fc2-d867-4a08-8df7-0464de3c133e\") " pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf"
Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.530894 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf"
Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.745662 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf"]
Jan 26 22:56:38 crc kubenswrapper[4793]: I0126 22:56:38.330028 4793 generic.go:334] "Generic (PLEG): container finished" podID="fcc64fc2-d867-4a08-8df7-0464de3c133e" containerID="3ebc6d238faa7316ad00ace81fc8416b6a8e4b12e5066d0517ece15124d92e53" exitCode=0
Jan 26 22:56:38 crc kubenswrapper[4793]: I0126 22:56:38.330256 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf" event={"ID":"fcc64fc2-d867-4a08-8df7-0464de3c133e","Type":"ContainerDied","Data":"3ebc6d238faa7316ad00ace81fc8416b6a8e4b12e5066d0517ece15124d92e53"}
Jan 26 22:56:38 crc kubenswrapper[4793]: I0126 22:56:38.331859 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf" event={"ID":"fcc64fc2-d867-4a08-8df7-0464de3c133e","Type":"ContainerStarted","Data":"247f41f3a3cf823a6d43a3ea44330009eff99966da1a98a548112762fa16ab18"}
Jan 26 22:56:39 crc kubenswrapper[4793]: I0126 22:56:39.345748 4793 generic.go:334] "Generic (PLEG): container finished" podID="fcc64fc2-d867-4a08-8df7-0464de3c133e" containerID="77e9f2a55841c6f32c6ebcee84cfca82631d548703243564d2625739d6cd45a2" exitCode=0
Jan 26 22:56:39 crc kubenswrapper[4793]: I0126 22:56:39.345826 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf" event={"ID":"fcc64fc2-d867-4a08-8df7-0464de3c133e","Type":"ContainerDied","Data":"77e9f2a55841c6f32c6ebcee84cfca82631d548703243564d2625739d6cd45a2"}
Jan 26 22:56:40 crc kubenswrapper[4793]: I0126 22:56:40.356113 4793 generic.go:334] "Generic (PLEG): container finished" podID="fcc64fc2-d867-4a08-8df7-0464de3c133e" containerID="aa64eebc8908f22f6ef5b8bd88933134c568a3b0aadc2266bc47dbe357678b2f" exitCode=0
Jan 26 22:56:40 crc kubenswrapper[4793]: I0126 22:56:40.356164 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf" event={"ID":"fcc64fc2-d867-4a08-8df7-0464de3c133e","Type":"ContainerDied","Data":"aa64eebc8908f22f6ef5b8bd88933134c568a3b0aadc2266bc47dbe357678b2f"}
Jan 26 22:56:41 crc kubenswrapper[4793]: I0126 22:56:41.665172 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf"
Jan 26 22:56:41 crc kubenswrapper[4793]: I0126 22:56:41.819185 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fcc64fc2-d867-4a08-8df7-0464de3c133e-bundle\") pod \"fcc64fc2-d867-4a08-8df7-0464de3c133e\" (UID: \"fcc64fc2-d867-4a08-8df7-0464de3c133e\") "
Jan 26 22:56:41 crc kubenswrapper[4793]: I0126 22:56:41.819479 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vjrr\" (UniqueName: \"kubernetes.io/projected/fcc64fc2-d867-4a08-8df7-0464de3c133e-kube-api-access-4vjrr\") pod \"fcc64fc2-d867-4a08-8df7-0464de3c133e\" (UID: \"fcc64fc2-d867-4a08-8df7-0464de3c133e\") "
Jan 26 22:56:41 crc kubenswrapper[4793]: I0126 22:56:41.819555 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fcc64fc2-d867-4a08-8df7-0464de3c133e-util\") pod \"fcc64fc2-d867-4a08-8df7-0464de3c133e\" (UID: \"fcc64fc2-d867-4a08-8df7-0464de3c133e\") "
Jan 26 22:56:41 crc kubenswrapper[4793]: I0126 22:56:41.821110 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcc64fc2-d867-4a08-8df7-0464de3c133e-bundle" (OuterVolumeSpecName: "bundle") pod "fcc64fc2-d867-4a08-8df7-0464de3c133e" (UID: "fcc64fc2-d867-4a08-8df7-0464de3c133e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 22:56:41 crc kubenswrapper[4793]: I0126 22:56:41.828583 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc64fc2-d867-4a08-8df7-0464de3c133e-kube-api-access-4vjrr" (OuterVolumeSpecName: "kube-api-access-4vjrr") pod "fcc64fc2-d867-4a08-8df7-0464de3c133e" (UID: "fcc64fc2-d867-4a08-8df7-0464de3c133e"). InnerVolumeSpecName "kube-api-access-4vjrr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 22:56:41 crc kubenswrapper[4793]: I0126 22:56:41.856477 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcc64fc2-d867-4a08-8df7-0464de3c133e-util" (OuterVolumeSpecName: "util") pod "fcc64fc2-d867-4a08-8df7-0464de3c133e" (UID: "fcc64fc2-d867-4a08-8df7-0464de3c133e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 22:56:41 crc kubenswrapper[4793]: I0126 22:56:41.922058 4793 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fcc64fc2-d867-4a08-8df7-0464de3c133e-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 22:56:41 crc kubenswrapper[4793]: I0126 22:56:41.922118 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vjrr\" (UniqueName: \"kubernetes.io/projected/fcc64fc2-d867-4a08-8df7-0464de3c133e-kube-api-access-4vjrr\") on node \"crc\" DevicePath \"\""
Jan 26 22:56:41 crc kubenswrapper[4793]: I0126 22:56:41.922139 4793 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fcc64fc2-d867-4a08-8df7-0464de3c133e-util\") on node \"crc\" DevicePath \"\""
Jan 26 22:56:42 crc kubenswrapper[4793]: I0126 22:56:42.379411 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf" event={"ID":"fcc64fc2-d867-4a08-8df7-0464de3c133e","Type":"ContainerDied","Data":"247f41f3a3cf823a6d43a3ea44330009eff99966da1a98a548112762fa16ab18"}
Jan 26 22:56:42 crc kubenswrapper[4793]: I0126 22:56:42.379492 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="247f41f3a3cf823a6d43a3ea44330009eff99966da1a98a548112762fa16ab18"
Jan 26 22:56:42 crc kubenswrapper[4793]: I0126 22:56:42.379530 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf"
Jan 26 22:56:44 crc kubenswrapper[4793]: I0126 22:56:44.569090 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs"]
Jan 26 22:56:44 crc kubenswrapper[4793]: E0126 22:56:44.569742 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc64fc2-d867-4a08-8df7-0464de3c133e" containerName="pull"
Jan 26 22:56:44 crc kubenswrapper[4793]: I0126 22:56:44.569756 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc64fc2-d867-4a08-8df7-0464de3c133e" containerName="pull"
Jan 26 22:56:44 crc kubenswrapper[4793]: E0126 22:56:44.569763 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc64fc2-d867-4a08-8df7-0464de3c133e" containerName="util"
Jan 26 22:56:44 crc kubenswrapper[4793]: I0126 22:56:44.569770 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc64fc2-d867-4a08-8df7-0464de3c133e" containerName="util"
Jan 26 22:56:44 crc kubenswrapper[4793]: E0126 22:56:44.569795 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc64fc2-d867-4a08-8df7-0464de3c133e" containerName="extract"
Jan 26 22:56:44 crc kubenswrapper[4793]: I0126 22:56:44.569802 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc64fc2-d867-4a08-8df7-0464de3c133e" containerName="extract"
Jan 26 22:56:44 crc kubenswrapper[4793]: I0126 22:56:44.569908 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc64fc2-d867-4a08-8df7-0464de3c133e" containerName="extract"
Jan 26 22:56:44 crc kubenswrapper[4793]: I0126 22:56:44.570334 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs"
Jan 26 22:56:44 crc kubenswrapper[4793]: I0126 22:56:44.572916 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-stxpn"
Jan 26 22:56:44 crc kubenswrapper[4793]: I0126 22:56:44.602270 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs"]
Jan 26 22:56:44 crc kubenswrapper[4793]: I0126 22:56:44.767545 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnkk7\" (UniqueName: \"kubernetes.io/projected/3a8dce43-2d79-4865-b17b-c73d6809865d-kube-api-access-gnkk7\") pod \"openstack-operator-controller-init-844f6594fb-khtqs\" (UID: \"3a8dce43-2d79-4865-b17b-c73d6809865d\") " pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs"
Jan 26 22:56:44 crc kubenswrapper[4793]: I0126 22:56:44.868928 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnkk7\" (UniqueName: \"kubernetes.io/projected/3a8dce43-2d79-4865-b17b-c73d6809865d-kube-api-access-gnkk7\") pod \"openstack-operator-controller-init-844f6594fb-khtqs\" (UID: \"3a8dce43-2d79-4865-b17b-c73d6809865d\") " pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs"
Jan 26 22:56:44 crc kubenswrapper[4793]: I0126 22:56:44.899112 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnkk7\" (UniqueName: \"kubernetes.io/projected/3a8dce43-2d79-4865-b17b-c73d6809865d-kube-api-access-gnkk7\") pod \"openstack-operator-controller-init-844f6594fb-khtqs\" (UID: \"3a8dce43-2d79-4865-b17b-c73d6809865d\") " pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs"
Jan 26 22:56:45 crc kubenswrapper[4793]: I0126 22:56:45.187910 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs"
Jan 26 22:56:45 crc kubenswrapper[4793]: I0126 22:56:45.710665 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs"]
Jan 26 22:56:46 crc kubenswrapper[4793]: I0126 22:56:46.414101 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs" event={"ID":"3a8dce43-2d79-4865-b17b-c73d6809865d","Type":"ContainerStarted","Data":"e948222ad969867dfca2ac3068c15d29f9c21e465ef410e50da6d8c9a179da98"}
Jan 26 22:56:50 crc kubenswrapper[4793]: I0126 22:56:50.483899 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs" event={"ID":"3a8dce43-2d79-4865-b17b-c73d6809865d","Type":"ContainerStarted","Data":"4ca2ce24aa7792f3f194f47af65c5c2a24088c6bd8e2fe97ed52b89cbf67652d"}
Jan 26 22:56:50 crc kubenswrapper[4793]: I0126 22:56:50.484471 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs"
Jan 26 22:56:50 crc kubenswrapper[4793]: I0126 22:56:50.542583 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs" podStartSLOduration=2.8115100809999998 podStartE2EDuration="6.542561039s" podCreationTimestamp="2026-01-26 22:56:44 +0000 UTC" firstStartedPulling="2026-01-26 22:56:45.722091778 +0000 UTC m=+1020.710863290" lastFinishedPulling="2026-01-26 22:56:49.453142746 +0000 UTC m=+1024.441914248" observedRunningTime="2026-01-26 22:56:50.529210992 +0000 UTC m=+1025.517982504" watchObservedRunningTime="2026-01-26 22:56:50.542561039 +0000 UTC m=+1025.531332561"
Jan 26 22:56:55 crc kubenswrapper[4793]: I0126 22:56:55.191504 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.305210 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6987f66698-542qx"]
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.306894 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6987f66698-542qx"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.310925 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-v44tz"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.347413 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6987f66698-542qx"]
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.397042 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z"]
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.398132 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.404715 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-lsvqx"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.408491 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z"]
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.414721 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q"]
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.415801 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.419328 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-kl6l4"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.419764 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx"]
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.420951 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.423938 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-xrlt8"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.425249 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt"]
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.426230 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.426377 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxstf\" (UniqueName: \"kubernetes.io/projected/71c7d600-2dac-4e19-a33f-0311a8342774-kube-api-access-gxstf\") pod \"barbican-operator-controller-manager-6987f66698-542qx\" (UID: \"71c7d600-2dac-4e19-a33f-0311a8342774\") " pod="openstack-operators/barbican-operator-controller-manager-6987f66698-542qx"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.430467 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg"]
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.431589 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.437764 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q"]
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.438407 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-tmbhr"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.438735 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-49lns"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.443275 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx"]
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.453616 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h"]
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.454782 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.456659 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.457042 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-lxnhg"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.480487 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h"]
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.496480 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt"]
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.531176 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68tjx\" (UniqueName: \"kubernetes.io/projected/58fa089e-6ccd-4521-88ff-e65e6928b738-kube-api-access-68tjx\") pod \"cinder-operator-controller-manager-655bf9cfbb-dgk2z\" (UID: \"58fa089e-6ccd-4521-88ff-e65e6928b738\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.531249 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gr5s\" (UniqueName: \"kubernetes.io/projected/42a387a4-faad-41fe-bfa9-0f600a06e6e0-kube-api-access-4gr5s\") pod \"horizon-operator-controller-manager-77d5c5b54f-vkwjg\" (UID: \"42a387a4-faad-41fe-bfa9-0f600a06e6e0\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.531280 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg46x\" (UniqueName: \"kubernetes.io/projected/f15117fc-93f9-498b-b831-e87094aa991e-kube-api-access-vg46x\") pod \"glance-operator-controller-manager-67dd55ff59-cpwtx\" (UID: \"f15117fc-93f9-498b-b831-e87094aa991e\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.531303 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg2zf\" (UniqueName: \"kubernetes.io/projected/18b41715-6c23-4821-b0da-1c17b5010375-kube-api-access-fg2zf\") pod \"heat-operator-controller-manager-954b94f75-fnjqt\" (UID: \"18b41715-6c23-4821-b0da-1c17b5010375\") " pod="openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.531349 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrt7k\" (UniqueName: \"kubernetes.io/projected/eb8ed64e-38aa-4e1f-be80-29be415125fd-kube-api-access-xrt7k\") pod \"infra-operator-controller-manager-7d75bc88d5-8zm8h\" (UID: \"eb8ed64e-38aa-4e1f-be80-29be415125fd\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.531372 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfknp\" (UniqueName: \"kubernetes.io/projected/4190249d-f34c-4b1a-a4da-038f7a806fc6-kube-api-access-wfknp\") pod \"designate-operator-controller-manager-77554cdc5c-vnt9q\" (UID: \"4190249d-f34c-4b1a-a4da-038f7a806fc6\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.531413 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxstf\" (UniqueName: \"kubernetes.io/projected/71c7d600-2dac-4e19-a33f-0311a8342774-kube-api-access-gxstf\") pod \"barbican-operator-controller-manager-6987f66698-542qx\" (UID: \"71c7d600-2dac-4e19-a33f-0311a8342774\") " pod="openstack-operators/barbican-operator-controller-manager-6987f66698-542qx"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.531446 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8zm8h\" (UID: \"eb8ed64e-38aa-4e1f-be80-29be415125fd\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.543250 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg"]
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.582589 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn"]
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.583384 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn"]
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.583520 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.585687 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxstf\" (UniqueName: \"kubernetes.io/projected/71c7d600-2dac-4e19-a33f-0311a8342774-kube-api-access-gxstf\") pod \"barbican-operator-controller-manager-6987f66698-542qx\" (UID: \"71c7d600-2dac-4e19-a33f-0311a8342774\") " pod="openstack-operators/barbican-operator-controller-manager-6987f66698-542qx"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.591591 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-kw6rf"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.595164 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87"]
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.596283 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.598937 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-6gjfm"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.612541 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87"]
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.629241 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn"]
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.630378 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.633848 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8zm8h\" (UID: \"eb8ed64e-38aa-4e1f-be80-29be415125fd\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.633913 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68tjx\" (UniqueName: \"kubernetes.io/projected/58fa089e-6ccd-4521-88ff-e65e6928b738-kube-api-access-68tjx\") pod \"cinder-operator-controller-manager-655bf9cfbb-dgk2z\" (UID: \"58fa089e-6ccd-4521-88ff-e65e6928b738\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.633941 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gr5s\" (UniqueName: \"kubernetes.io/projected/42a387a4-faad-41fe-bfa9-0f600a06e6e0-kube-api-access-4gr5s\") pod \"horizon-operator-controller-manager-77d5c5b54f-vkwjg\" (UID: \"42a387a4-faad-41fe-bfa9-0f600a06e6e0\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.633960 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg46x\" (UniqueName: \"kubernetes.io/projected/f15117fc-93f9-498b-b831-e87094aa991e-kube-api-access-vg46x\") pod \"glance-operator-controller-manager-67dd55ff59-cpwtx\" (UID: \"f15117fc-93f9-498b-b831-e87094aa991e\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx"
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.633984 4793 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"kube-api-access-fg2zf\" (UniqueName: \"kubernetes.io/projected/18b41715-6c23-4821-b0da-1c17b5010375-kube-api-access-fg2zf\") pod \"heat-operator-controller-manager-954b94f75-fnjqt\" (UID: \"18b41715-6c23-4821-b0da-1c17b5010375\") " pod="openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.634015 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm9g2\" (UniqueName: \"kubernetes.io/projected/22d5cae5-26fb-4a47-97ac-78ba7120d29c-kube-api-access-rm9g2\") pod \"keystone-operator-controller-manager-55f684fd56-g2r87\" (UID: \"22d5cae5-26fb-4a47-97ac-78ba7120d29c\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.634036 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dzf9\" (UniqueName: \"kubernetes.io/projected/bcf4192a-ff9b-4b02-83d0-a17ecd3ba795-kube-api-access-8dzf9\") pod \"ironic-operator-controller-manager-768b776ffb-7twhn\" (UID: \"bcf4192a-ff9b-4b02-83d0-a17ecd3ba795\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.634069 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrt7k\" (UniqueName: \"kubernetes.io/projected/eb8ed64e-38aa-4e1f-be80-29be415125fd-kube-api-access-xrt7k\") pod \"infra-operator-controller-manager-7d75bc88d5-8zm8h\" (UID: \"eb8ed64e-38aa-4e1f-be80-29be415125fd\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.634094 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfknp\" (UniqueName: 
\"kubernetes.io/projected/4190249d-f34c-4b1a-a4da-038f7a806fc6-kube-api-access-wfknp\") pod \"designate-operator-controller-manager-77554cdc5c-vnt9q\" (UID: \"4190249d-f34c-4b1a-a4da-038f7a806fc6\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q" Jan 26 22:57:13 crc kubenswrapper[4793]: E0126 22:57:13.634563 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 22:57:13 crc kubenswrapper[4793]: E0126 22:57:13.634643 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert podName:eb8ed64e-38aa-4e1f-be80-29be415125fd nodeName:}" failed. No retries permitted until 2026-01-26 22:57:14.134617181 +0000 UTC m=+1049.123388693 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert") pod "infra-operator-controller-manager-7d75bc88d5-8zm8h" (UID: "eb8ed64e-38aa-4e1f-be80-29be415125fd") : secret "infra-operator-webhook-server-cert" not found Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.638865 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-874vv" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.657696 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.658907 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.661699 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-8d7wf" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.671220 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gr5s\" (UniqueName: \"kubernetes.io/projected/42a387a4-faad-41fe-bfa9-0f600a06e6e0-kube-api-access-4gr5s\") pod \"horizon-operator-controller-manager-77d5c5b54f-vkwjg\" (UID: \"42a387a4-faad-41fe-bfa9-0f600a06e6e0\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.676366 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfknp\" (UniqueName: \"kubernetes.io/projected/4190249d-f34c-4b1a-a4da-038f7a806fc6-kube-api-access-wfknp\") pod \"designate-operator-controller-manager-77554cdc5c-vnt9q\" (UID: \"4190249d-f34c-4b1a-a4da-038f7a806fc6\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.680373 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6987f66698-542qx" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.689499 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.707797 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.709149 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrt7k\" (UniqueName: \"kubernetes.io/projected/eb8ed64e-38aa-4e1f-be80-29be415125fd-kube-api-access-xrt7k\") pod \"infra-operator-controller-manager-7d75bc88d5-8zm8h\" (UID: \"eb8ed64e-38aa-4e1f-be80-29be415125fd\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.709981 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg2zf\" (UniqueName: \"kubernetes.io/projected/18b41715-6c23-4821-b0da-1c17b5010375-kube-api-access-fg2zf\") pod \"heat-operator-controller-manager-954b94f75-fnjqt\" (UID: \"18b41715-6c23-4821-b0da-1c17b5010375\") " pod="openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.710277 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg46x\" (UniqueName: \"kubernetes.io/projected/f15117fc-93f9-498b-b831-e87094aa991e-kube-api-access-vg46x\") pod \"glance-operator-controller-manager-67dd55ff59-cpwtx\" (UID: \"f15117fc-93f9-498b-b831-e87094aa991e\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.710745 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-68tjx\" (UniqueName: \"kubernetes.io/projected/58fa089e-6ccd-4521-88ff-e65e6928b738-kube-api-access-68tjx\") pod \"cinder-operator-controller-manager-655bf9cfbb-dgk2z\" (UID: \"58fa089e-6ccd-4521-88ff-e65e6928b738\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.719163 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.720154 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.722427 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.725696 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-lwd7b" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.739918 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9m8m\" (UniqueName: \"kubernetes.io/projected/da1ca3d4-2bae-4133-88c8-b67f74ebc6ab-kube-api-access-l9m8m\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf\" (UID: \"da1ca3d4-2bae-4133-88c8-b67f74ebc6ab\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.739968 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm9g2\" (UniqueName: \"kubernetes.io/projected/22d5cae5-26fb-4a47-97ac-78ba7120d29c-kube-api-access-rm9g2\") pod \"keystone-operator-controller-manager-55f684fd56-g2r87\" (UID: 
\"22d5cae5-26fb-4a47-97ac-78ba7120d29c\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.739989 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dzf9\" (UniqueName: \"kubernetes.io/projected/bcf4192a-ff9b-4b02-83d0-a17ecd3ba795-kube-api-access-8dzf9\") pod \"ironic-operator-controller-manager-768b776ffb-7twhn\" (UID: \"bcf4192a-ff9b-4b02-83d0-a17ecd3ba795\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.740041 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pshpw\" (UniqueName: \"kubernetes.io/projected/c6fcff5b-7bdb-4607-bd4d-389c863ddba6-kube-api-access-pshpw\") pod \"manila-operator-controller-manager-849fcfbb6b-flkdn\" (UID: \"c6fcff5b-7bdb-4607-bd4d-389c863ddba6\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.740454 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.745419 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.746398 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.747856 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-6mfk4" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.764220 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm9g2\" (UniqueName: \"kubernetes.io/projected/22d5cae5-26fb-4a47-97ac-78ba7120d29c-kube-api-access-rm9g2\") pod \"keystone-operator-controller-manager-55f684fd56-g2r87\" (UID: \"22d5cae5-26fb-4a47-97ac-78ba7120d29c\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.775759 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.778561 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.783221 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dzf9\" (UniqueName: \"kubernetes.io/projected/bcf4192a-ff9b-4b02-83d0-a17ecd3ba795-kube-api-access-8dzf9\") pod \"ironic-operator-controller-manager-768b776ffb-7twhn\" (UID: \"bcf4192a-ff9b-4b02-83d0-a17ecd3ba795\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.788127 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.798463 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.798603 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.801078 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.809463 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-zkdqv" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.810101 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.842900 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9m8m\" (UniqueName: \"kubernetes.io/projected/da1ca3d4-2bae-4133-88c8-b67f74ebc6ab-kube-api-access-l9m8m\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf\" (UID: \"da1ca3d4-2bae-4133-88c8-b67f74ebc6ab\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.843083 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pshpw\" (UniqueName: \"kubernetes.io/projected/c6fcff5b-7bdb-4607-bd4d-389c863ddba6-kube-api-access-pshpw\") pod \"manila-operator-controller-manager-849fcfbb6b-flkdn\" (UID: \"c6fcff5b-7bdb-4607-bd4d-389c863ddba6\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.846772 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.847769 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gwc4\" (UniqueName: \"kubernetes.io/projected/c130e1aa-1f05-45ff-8364-714b79fa7282-kube-api-access-7gwc4\") pod \"nova-operator-controller-manager-5b4fc7b894-55w8b\" (UID: \"c130e1aa-1f05-45ff-8364-714b79fa7282\") " pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.867548 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9m8m\" (UniqueName: \"kubernetes.io/projected/da1ca3d4-2bae-4133-88c8-b67f74ebc6ab-kube-api-access-l9m8m\") pod 
\"mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf\" (UID: \"da1ca3d4-2bae-4133-88c8-b67f74ebc6ab\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.877835 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pshpw\" (UniqueName: \"kubernetes.io/projected/c6fcff5b-7bdb-4607-bd4d-389c863ddba6-kube-api-access-pshpw\") pod \"manila-operator-controller-manager-849fcfbb6b-flkdn\" (UID: \"c6fcff5b-7bdb-4607-bd4d-389c863ddba6\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.917799 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.923467 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.923521 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.927630 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.929420 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.929446 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-q4r9v" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.938449 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.939995 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.942366 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-7wwlz" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.957116 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2trgc\" (UniqueName: \"kubernetes.io/projected/4f086064-ee5a-47cf-bf97-fc2a423d8c33-kube-api-access-2trgc\") pod \"octavia-operator-controller-manager-756f86fc74-mvr7p\" (UID: \"4f086064-ee5a-47cf-bf97-fc2a423d8c33\") " pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.957926 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psc2m\" (UniqueName: \"kubernetes.io/projected/b934b85e-14a0-4ad2-bdc0-82280eb346a9-kube-api-access-psc2m\") pod \"neutron-operator-controller-manager-7ffd8d76d4-vxk8h\" (UID: 
\"b934b85e-14a0-4ad2-bdc0-82280eb346a9\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.958118 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gwc4\" (UniqueName: \"kubernetes.io/projected/c130e1aa-1f05-45ff-8364-714b79fa7282-kube-api-access-7gwc4\") pod \"nova-operator-controller-manager-5b4fc7b894-55w8b\" (UID: \"c130e1aa-1f05-45ff-8364-714b79fa7282\") " pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.961249 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.962011 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.969427 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.979934 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gwc4\" (UniqueName: \"kubernetes.io/projected/c130e1aa-1f05-45ff-8364-714b79fa7282-kube-api-access-7gwc4\") pod \"nova-operator-controller-manager-5b4fc7b894-55w8b\" (UID: \"c130e1aa-1f05-45ff-8364-714b79fa7282\") " pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.980131 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.981393 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.984329 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-vbwhf" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.987650 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.989568 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.992084 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-c4zkj" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.008301 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.035927 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.069355 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.070539 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2trgc\" (UniqueName: \"kubernetes.io/projected/4f086064-ee5a-47cf-bf97-fc2a423d8c33-kube-api-access-2trgc\") pod \"octavia-operator-controller-manager-756f86fc74-mvr7p\" (UID: \"4f086064-ee5a-47cf-bf97-fc2a423d8c33\") " pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.070577 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psc2m\" (UniqueName: \"kubernetes.io/projected/b934b85e-14a0-4ad2-bdc0-82280eb346a9-kube-api-access-psc2m\") pod \"neutron-operator-controller-manager-7ffd8d76d4-vxk8h\" (UID: \"b934b85e-14a0-4ad2-bdc0-82280eb346a9\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.070623 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pwkx\" (UniqueName: \"kubernetes.io/projected/86ca275a-4c50-479e-9ed5-4e03dda309cf-kube-api-access-9pwkx\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4\" (UID: \"86ca275a-4c50-479e-9ed5-4e03dda309cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.070650 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4\" (UID: \"86ca275a-4c50-479e-9ed5-4e03dda309cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 
22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.070702 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcmmm\" (UniqueName: \"kubernetes.io/projected/6e0878fd-53dc-4048-9267-2520c5919067-kube-api-access-bcmmm\") pod \"ovn-operator-controller-manager-6f75f45d54-2jn8d\" (UID: \"6e0878fd-53dc-4048-9267-2520c5919067\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.089157 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.095282 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psc2m\" (UniqueName: \"kubernetes.io/projected/b934b85e-14a0-4ad2-bdc0-82280eb346a9-kube-api-access-psc2m\") pod \"neutron-operator-controller-manager-7ffd8d76d4-vxk8h\" (UID: \"b934b85e-14a0-4ad2-bdc0-82280eb346a9\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.098012 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.100097 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.102629 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2trgc\" (UniqueName: \"kubernetes.io/projected/4f086064-ee5a-47cf-bf97-fc2a423d8c33-kube-api-access-2trgc\") pod \"octavia-operator-controller-manager-756f86fc74-mvr7p\" (UID: \"4f086064-ee5a-47cf-bf97-fc2a423d8c33\") " pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.104761 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-pdhml" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.111561 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.120424 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.135418 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.153952 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.155161 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.160753 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-b5wd7" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.171450 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8cxj\" (UniqueName: \"kubernetes.io/projected/06401d29-ca17-43df-865a-26523cf84f67-kube-api-access-z8cxj\") pod \"swift-operator-controller-manager-547cbdb99f-rrmcg\" (UID: \"06401d29-ca17-43df-865a-26523cf84f67\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.171537 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pwkx\" (UniqueName: \"kubernetes.io/projected/86ca275a-4c50-479e-9ed5-4e03dda309cf-kube-api-access-9pwkx\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4\" (UID: \"86ca275a-4c50-479e-9ed5-4e03dda309cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.171566 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4\" (UID: \"86ca275a-4c50-479e-9ed5-4e03dda309cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.171589 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrf8j\" (UniqueName: \"kubernetes.io/projected/7968980a-764c-4cd2-b77f-ed33fffbd294-kube-api-access-jrf8j\") 
pod \"placement-operator-controller-manager-79d5ccc684-bp8k8\" (UID: \"7968980a-764c-4cd2-b77f-ed33fffbd294\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.171629 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8zm8h\" (UID: \"eb8ed64e-38aa-4e1f-be80-29be415125fd\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.171650 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcmmm\" (UniqueName: \"kubernetes.io/projected/6e0878fd-53dc-4048-9267-2520c5919067-kube-api-access-bcmmm\") pod \"ovn-operator-controller-manager-6f75f45d54-2jn8d\" (UID: \"6e0878fd-53dc-4048-9267-2520c5919067\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" Jan 26 22:57:14 crc kubenswrapper[4793]: E0126 22:57:14.172179 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 22:57:14 crc kubenswrapper[4793]: E0126 22:57:14.172250 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert podName:86ca275a-4c50-479e-9ed5-4e03dda309cf nodeName:}" failed. No retries permitted until 2026-01-26 22:57:14.672235826 +0000 UTC m=+1049.661007338 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" (UID: "86ca275a-4c50-479e-9ed5-4e03dda309cf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 22:57:14 crc kubenswrapper[4793]: E0126 22:57:14.172425 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 22:57:14 crc kubenswrapper[4793]: E0126 22:57:14.172446 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert podName:eb8ed64e-38aa-4e1f-be80-29be415125fd nodeName:}" failed. No retries permitted until 2026-01-26 22:57:15.172439142 +0000 UTC m=+1050.161210654 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert") pod "infra-operator-controller-manager-7d75bc88d5-8zm8h" (UID: "eb8ed64e-38aa-4e1f-be80-29be415125fd") : secret "infra-operator-webhook-server-cert" not found Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.180106 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.197672 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pwkx\" (UniqueName: \"kubernetes.io/projected/86ca275a-4c50-479e-9ed5-4e03dda309cf-kube-api-access-9pwkx\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4\" (UID: \"86ca275a-4c50-479e-9ed5-4e03dda309cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.201661 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bcmmm\" (UniqueName: \"kubernetes.io/projected/6e0878fd-53dc-4048-9267-2520c5919067-kube-api-access-bcmmm\") pod \"ovn-operator-controller-manager-6f75f45d54-2jn8d\" (UID: \"6e0878fd-53dc-4048-9267-2520c5919067\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.219137 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.220492 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.240789 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-79fv7" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.244730 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.274029 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8cxj\" (UniqueName: \"kubernetes.io/projected/06401d29-ca17-43df-865a-26523cf84f67-kube-api-access-z8cxj\") pod \"swift-operator-controller-manager-547cbdb99f-rrmcg\" (UID: \"06401d29-ca17-43df-865a-26523cf84f67\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.274365 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz8k8\" (UniqueName: \"kubernetes.io/projected/65c6d17b-b112-421c-a686-ce1601f91181-kube-api-access-sz8k8\") pod \"telemetry-operator-controller-manager-799bc87c89-j292m\" (UID: \"65c6d17b-b112-421c-a686-ce1601f91181\") " 
pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.274491 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l5gv\" (UniqueName: \"kubernetes.io/projected/6e38b64f-cd6a-4cac-a125-be388bb0dc78-kube-api-access-6l5gv\") pod \"test-operator-controller-manager-69797bbcbd-sgcrb\" (UID: \"6e38b64f-cd6a-4cac-a125-be388bb0dc78\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.278538 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrf8j\" (UniqueName: \"kubernetes.io/projected/7968980a-764c-4cd2-b77f-ed33fffbd294-kube-api-access-jrf8j\") pod \"placement-operator-controller-manager-79d5ccc684-bp8k8\" (UID: \"7968980a-764c-4cd2-b77f-ed33fffbd294\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.309065 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.327209 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8cxj\" (UniqueName: \"kubernetes.io/projected/06401d29-ca17-43df-865a-26523cf84f67-kube-api-access-z8cxj\") pod \"swift-operator-controller-manager-547cbdb99f-rrmcg\" (UID: \"06401d29-ca17-43df-865a-26523cf84f67\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.327119 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrf8j\" (UniqueName: \"kubernetes.io/projected/7968980a-764c-4cd2-b77f-ed33fffbd294-kube-api-access-jrf8j\") pod \"placement-operator-controller-manager-79d5ccc684-bp8k8\" (UID: \"7968980a-764c-4cd2-b77f-ed33fffbd294\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.336838 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.344798 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.346737 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.349096 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.349685 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.350404 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lmwx4" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.362392 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.370743 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.375761 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.377403 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.380407 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-hhsw8" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.381014 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l5gv\" (UniqueName: \"kubernetes.io/projected/6e38b64f-cd6a-4cac-a125-be388bb0dc78-kube-api-access-6l5gv\") pod \"test-operator-controller-manager-69797bbcbd-sgcrb\" (UID: \"6e38b64f-cd6a-4cac-a125-be388bb0dc78\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.381132 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq7mf\" (UniqueName: \"kubernetes.io/projected/d0e46999-98e6-40a6-99a2-f4c01dad0f81-kube-api-access-pq7mf\") pod \"watcher-operator-controller-manager-cb484894b-qxf7r\" (UID: \"d0e46999-98e6-40a6-99a2-f4c01dad0f81\") " pod="openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.381167 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz8k8\" (UniqueName: \"kubernetes.io/projected/65c6d17b-b112-421c-a686-ce1601f91181-kube-api-access-sz8k8\") pod \"telemetry-operator-controller-manager-799bc87c89-j292m\" (UID: \"65c6d17b-b112-421c-a686-ce1601f91181\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.384326 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.412116 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz8k8\" (UniqueName: \"kubernetes.io/projected/65c6d17b-b112-421c-a686-ce1601f91181-kube-api-access-sz8k8\") pod \"telemetry-operator-controller-manager-799bc87c89-j292m\" (UID: \"65c6d17b-b112-421c-a686-ce1601f91181\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.449582 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.455381 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l5gv\" (UniqueName: \"kubernetes.io/projected/6e38b64f-cd6a-4cac-a125-be388bb0dc78-kube-api-access-6l5gv\") pod \"test-operator-controller-manager-69797bbcbd-sgcrb\" (UID: \"6e38b64f-cd6a-4cac-a125-be388bb0dc78\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.486965 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.500581 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.501278 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6987f66698-542qx"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.521134 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgnn4\" (UniqueName: \"kubernetes.io/projected/d4c8a5bf-4e12-46b4-9d2a-1a098416e90a-kube-api-access-mgnn4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vzmsv\" (UID: \"d4c8a5bf-4e12-46b4-9d2a-1a098416e90a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.521210 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.521275 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzhdf\" (UniqueName: \"kubernetes.io/projected/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-kube-api-access-mzhdf\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.521361 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq7mf\" (UniqueName: \"kubernetes.io/projected/d0e46999-98e6-40a6-99a2-f4c01dad0f81-kube-api-access-pq7mf\") pod 
\"watcher-operator-controller-manager-cb484894b-qxf7r\" (UID: \"d0e46999-98e6-40a6-99a2-f4c01dad0f81\") " pod="openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.521484 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.548944 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq7mf\" (UniqueName: \"kubernetes.io/projected/d0e46999-98e6-40a6-99a2-f4c01dad0f81-kube-api-access-pq7mf\") pod \"watcher-operator-controller-manager-cb484894b-qxf7r\" (UID: \"d0e46999-98e6-40a6-99a2-f4c01dad0f81\") " pod="openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.557057 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.622378 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgnn4\" (UniqueName: \"kubernetes.io/projected/d4c8a5bf-4e12-46b4-9d2a-1a098416e90a-kube-api-access-mgnn4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vzmsv\" (UID: \"d4c8a5bf-4e12-46b4-9d2a-1a098416e90a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.622427 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.622470 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzhdf\" (UniqueName: \"kubernetes.io/projected/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-kube-api-access-mzhdf\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.622528 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:14 crc kubenswrapper[4793]: E0126 22:57:14.622670 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 22:57:14 crc kubenswrapper[4793]: E0126 22:57:14.622721 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs podName:b1cd2fec-ba5b-4984-90c5-565df4ef5cd1 nodeName:}" failed. No retries permitted until 2026-01-26 22:57:15.122704455 +0000 UTC m=+1050.111475967 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs") pod "openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" (UID: "b1cd2fec-ba5b-4984-90c5-565df4ef5cd1") : secret "webhook-server-cert" not found Jan 26 22:57:14 crc kubenswrapper[4793]: E0126 22:57:14.623092 4793 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 22:57:14 crc kubenswrapper[4793]: E0126 22:57:14.623121 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs podName:b1cd2fec-ba5b-4984-90c5-565df4ef5cd1 nodeName:}" failed. No retries permitted until 2026-01-26 22:57:15.123114347 +0000 UTC m=+1050.111885859 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs") pod "openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" (UID: "b1cd2fec-ba5b-4984-90c5-565df4ef5cd1") : secret "metrics-server-cert" not found Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.651092 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.659295 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzhdf\" (UniqueName: \"kubernetes.io/projected/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-kube-api-access-mzhdf\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.659824 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgnn4\" (UniqueName: \"kubernetes.io/projected/d4c8a5bf-4e12-46b4-9d2a-1a098416e90a-kube-api-access-mgnn4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vzmsv\" (UID: \"d4c8a5bf-4e12-46b4-9d2a-1a098416e90a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.678558 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z" event={"ID":"58fa089e-6ccd-4521-88ff-e65e6928b738","Type":"ContainerStarted","Data":"7aa39c08a6f64dc91cd340bd456e5890a6b0398de38cd4767da6c517854781d4"} Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.690613 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6987f66698-542qx" event={"ID":"71c7d600-2dac-4e19-a33f-0311a8342774","Type":"ContainerStarted","Data":"e81fc8d96a20c225f4515a3f343ede412ae28484e9ca162a4b0cca92fc969977"} Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.699149 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q" 
event={"ID":"4190249d-f34c-4b1a-a4da-038f7a806fc6","Type":"ContainerStarted","Data":"62909fd9103571eb2404966c205ae5e93c923ef4c53d9984df76e142827191ec"} Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.724146 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4\" (UID: \"86ca275a-4c50-479e-9ed5-4e03dda309cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:57:14 crc kubenswrapper[4793]: E0126 22:57:14.724453 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 22:57:14 crc kubenswrapper[4793]: E0126 22:57:14.724515 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert podName:86ca275a-4c50-479e-9ed5-4e03dda309cf nodeName:}" failed. No retries permitted until 2026-01-26 22:57:15.72449666 +0000 UTC m=+1050.713268162 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" (UID: "86ca275a-4c50-479e-9ed5-4e03dda309cf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.732083 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.776429 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.786043 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.804496 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt"] Jan 26 22:57:14 crc kubenswrapper[4793]: W0126 22:57:14.834805 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42a387a4_faad_41fe_bfa9_0f600a06e6e0.slice/crio-7ee3750efcba611d6cdc0d3756ebb536c9635fff9cf83b86e22a6dd460b8417c WatchSource:0}: Error finding container 7ee3750efcba611d6cdc0d3756ebb536c9635fff9cf83b86e22a6dd460b8417c: Status 404 returned error can't find the container with id 7ee3750efcba611d6cdc0d3756ebb536c9635fff9cf83b86e22a6dd460b8417c Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.921855 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf"] Jan 26 22:57:14 crc kubenswrapper[4793]: W0126 22:57:14.938580 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda1ca3d4_2bae_4133_88c8_b67f74ebc6ab.slice/crio-904901b71f9e54ad3580256c34ece857b3a93456ca6bec43786a16b2488f7571 WatchSource:0}: Error finding container 904901b71f9e54ad3580256c34ece857b3a93456ca6bec43786a16b2488f7571: Status 404 returned error can't find the container with id 904901b71f9e54ad3580256c34ece857b3a93456ca6bec43786a16b2488f7571 Jan 26 22:57:14 crc 
kubenswrapper[4793]: I0126 22:57:14.959367 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.965396 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.969558 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.972433 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87"] Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.065808 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d"] Jan 26 22:57:15 crc kubenswrapper[4793]: W0126 22:57:15.073540 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f086064_ee5a_47cf_bf97_fc2a423d8c33.slice/crio-d57ea2b5b16d68b55061eeeface30a6bcce65b771c4873a23932db5e52c054ba WatchSource:0}: Error finding container d57ea2b5b16d68b55061eeeface30a6bcce65b771c4873a23932db5e52c054ba: Status 404 returned error can't find the container with id d57ea2b5b16d68b55061eeeface30a6bcce65b771c4873a23932db5e52c054ba Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.080378 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b"] Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.081760 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bcmmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-2jn8d_openstack-operators(6e0878fd-53dc-4048-9267-2520c5919067): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.083307 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" podUID="6e0878fd-53dc-4048-9267-2520c5919067" Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.091424 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p"] Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.138349 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 
22:57:15.138474 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.138643 4793 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.138736 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.139174 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs podName:b1cd2fec-ba5b-4984-90c5-565df4ef5cd1 nodeName:}" failed. No retries permitted until 2026-01-26 22:57:16.138733667 +0000 UTC m=+1051.127505189 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs") pod "openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" (UID: "b1cd2fec-ba5b-4984-90c5-565df4ef5cd1") : secret "metrics-server-cert" not found Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.139223 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs podName:b1cd2fec-ba5b-4984-90c5-565df4ef5cd1 nodeName:}" failed. No retries permitted until 2026-01-26 22:57:16.13920913 +0000 UTC m=+1051.127980652 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs") pod "openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" (UID: "b1cd2fec-ba5b-4984-90c5-565df4ef5cd1") : secret "webhook-server-cert" not found Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.239098 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8zm8h\" (UID: \"eb8ed64e-38aa-4e1f-be80-29be415125fd\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.239474 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.239589 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert podName:eb8ed64e-38aa-4e1f-be80-29be415125fd nodeName:}" failed. No retries permitted until 2026-01-26 22:57:17.239560354 +0000 UTC m=+1052.228331866 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert") pod "infra-operator-controller-manager-7d75bc88d5-8zm8h" (UID: "eb8ed64e-38aa-4e1f-be80-29be415125fd") : secret "infra-operator-webhook-server-cert" not found Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.254893 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8"] Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.279598 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb"] Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.281856 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mgnn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-vzmsv_openstack-operators(d4c8a5bf-4e12-46b4-9d2a-1a098416e90a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.282210 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6l5gv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-sgcrb_openstack-operators(6e38b64f-cd6a-4cac-a125-be388bb0dc78): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.283345 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" podUID="6e38b64f-cd6a-4cac-a125-be388bb0dc78" Jan 26 22:57:15 crc 
kubenswrapper[4793]: E0126 22:57:15.283390 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv" podUID="d4c8a5bf-4e12-46b4-9d2a-1a098416e90a" Jan 26 22:57:15 crc kubenswrapper[4793]: W0126 22:57:15.318437 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06401d29_ca17_43df_865a_26523cf84f67.slice/crio-9f7807fb8dbbb9408e1f0e1a2fccbbf80071f9b2b00e8d964cbc45986ec88b5e WatchSource:0}: Error finding container 9f7807fb8dbbb9408e1f0e1a2fccbbf80071f9b2b00e8d964cbc45986ec88b5e: Status 404 returned error can't find the container with id 9f7807fb8dbbb9408e1f0e1a2fccbbf80071f9b2b00e8d964cbc45986ec88b5e Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.323263 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv"] Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.332517 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg"] Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.354212 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z8cxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-rrmcg_openstack-operators(06401d29-ca17-43df-865a-26523cf84f67): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.355969 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" podUID="06401d29-ca17-43df-865a-26523cf84f67" Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.382533 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m"] Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.421501 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:1f1fea3b7df89b81756eab8e6f4c9bed01ab7e949a6ce2d7692c260f41dfbc20,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sz8k8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-799bc87c89-j292m_openstack-operators(65c6d17b-b112-421c-a686-ce1601f91181): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.422616 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" podUID="65c6d17b-b112-421c-a686-ce1601f91181" Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.442153 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r"] Jan 26 22:57:15 crc kubenswrapper[4793]: W0126 22:57:15.504920 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0e46999_98e6_40a6_99a2_f4c01dad0f81.slice/crio-7fd8fc05d46dd65b196e18f36564f0f7da58ed336597f692dc7ba1a4d7f2e102 WatchSource:0}: Error finding container 7fd8fc05d46dd65b196e18f36564f0f7da58ed336597f692dc7ba1a4d7f2e102: Status 404 returned error can't find the container with id 
7fd8fc05d46dd65b196e18f36564f0f7da58ed336597f692dc7ba1a4d7f2e102 Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.710348 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p" event={"ID":"4f086064-ee5a-47cf-bf97-fc2a423d8c33","Type":"ContainerStarted","Data":"d57ea2b5b16d68b55061eeeface30a6bcce65b771c4873a23932db5e52c054ba"} Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.712603 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87" event={"ID":"22d5cae5-26fb-4a47-97ac-78ba7120d29c","Type":"ContainerStarted","Data":"aad8872fe9b6f6e54005edae2bb793fe822303c6a00529fb0303108932a5990e"} Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.715540 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" event={"ID":"6e0878fd-53dc-4048-9267-2520c5919067","Type":"ContainerStarted","Data":"912f0e8341d91ea6b380c8c5da5824c20630ef96d17f00e7e297cd969c60a67c"} Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.717458 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" podUID="6e0878fd-53dc-4048-9267-2520c5919067" Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.718176 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt" event={"ID":"18b41715-6c23-4821-b0da-1c17b5010375","Type":"ContainerStarted","Data":"fb8448305486298a4f9d180cc5253e59129dbd2fb0eb81d07e83eb4ed1a23555"} Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.720810 4793 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" event={"ID":"06401d29-ca17-43df-865a-26523cf84f67","Type":"ContainerStarted","Data":"9f7807fb8dbbb9408e1f0e1a2fccbbf80071f9b2b00e8d964cbc45986ec88b5e"} Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.723715 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" event={"ID":"65c6d17b-b112-421c-a686-ce1601f91181","Type":"ContainerStarted","Data":"9ccb72589df362943a49aebbb3d7dd9e7d7570991846123d354b80abb9c7f1e0"} Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.724943 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" podUID="06401d29-ca17-43df-865a-26523cf84f67" Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.726748 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn" event={"ID":"c6fcff5b-7bdb-4607-bd4d-389c863ddba6","Type":"ContainerStarted","Data":"4281698cf49ced2dc828bc1eb747adadc844a720239fabf8cda8679ece77b04a"} Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.726939 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:1f1fea3b7df89b81756eab8e6f4c9bed01ab7e949a6ce2d7692c260f41dfbc20\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" podUID="65c6d17b-b112-421c-a686-ce1601f91181" Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.729935 4793 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h" event={"ID":"b934b85e-14a0-4ad2-bdc0-82280eb346a9","Type":"ContainerStarted","Data":"eacfa9d0c65e16f680ab10bd9838468514e0c1def06cbf29f59bd743f14309aa"} Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.734286 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn" event={"ID":"bcf4192a-ff9b-4b02-83d0-a17ecd3ba795","Type":"ContainerStarted","Data":"08483100df0e7db2d370aff8f79acfa633ee461ac6bb956b00cd0c5e5ec602a1"} Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.743532 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r" event={"ID":"d0e46999-98e6-40a6-99a2-f4c01dad0f81","Type":"ContainerStarted","Data":"7fd8fc05d46dd65b196e18f36564f0f7da58ed336597f692dc7ba1a4d7f2e102"} Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.748064 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf" event={"ID":"da1ca3d4-2bae-4133-88c8-b67f74ebc6ab","Type":"ContainerStarted","Data":"904901b71f9e54ad3580256c34ece857b3a93456ca6bec43786a16b2488f7571"} Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.752148 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8" event={"ID":"7968980a-764c-4cd2-b77f-ed33fffbd294","Type":"ContainerStarted","Data":"9d844a09321bb3df6b7dcd6f224c5e14d22b6f10f0e4177663e8cbe590495724"} Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.754642 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv" 
event={"ID":"d4c8a5bf-4e12-46b4-9d2a-1a098416e90a","Type":"ContainerStarted","Data":"2eaa3af567b72422d79f4fe366a3c7eded6a6b9de3cafe4f58d4e8e759941159"} Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.762630 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv" podUID="d4c8a5bf-4e12-46b4-9d2a-1a098416e90a" Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.770624 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4\" (UID: \"86ca275a-4c50-479e-9ed5-4e03dda309cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.770816 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.771675 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert podName:86ca275a-4c50-479e-9ed5-4e03dda309cf nodeName:}" failed. No retries permitted until 2026-01-26 22:57:17.771645835 +0000 UTC m=+1052.760417347 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" (UID: "86ca275a-4c50-479e-9ed5-4e03dda309cf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.801401 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" event={"ID":"c130e1aa-1f05-45ff-8364-714b79fa7282","Type":"ContainerStarted","Data":"a5c867b9927299be310ccbc0cb616dae9ad68f5d7f00ed7bfa18b82b1aeee089"} Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.801447 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg" event={"ID":"42a387a4-faad-41fe-bfa9-0f600a06e6e0","Type":"ContainerStarted","Data":"7ee3750efcba611d6cdc0d3756ebb536c9635fff9cf83b86e22a6dd460b8417c"} Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.801471 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" event={"ID":"6e38b64f-cd6a-4cac-a125-be388bb0dc78","Type":"ContainerStarted","Data":"551b85009d2cd9d2b417dfef21561c43143061397f7429c4ffe6e9320d310629"} Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.801483 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx" event={"ID":"f15117fc-93f9-498b-b831-e87094aa991e","Type":"ContainerStarted","Data":"4817eef049e4621cc079b9edc2d5b6c0576f114e00e73638c7a2cff89397d768"} Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.817402 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" podUID="6e38b64f-cd6a-4cac-a125-be388bb0dc78" Jan 26 22:57:16 crc kubenswrapper[4793]: I0126 22:57:16.184931 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:16 crc kubenswrapper[4793]: I0126 22:57:16.185043 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:16 crc kubenswrapper[4793]: E0126 22:57:16.185252 4793 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 22:57:16 crc kubenswrapper[4793]: E0126 22:57:16.185317 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs podName:b1cd2fec-ba5b-4984-90c5-565df4ef5cd1 nodeName:}" failed. No retries permitted until 2026-01-26 22:57:18.185295775 +0000 UTC m=+1053.174067287 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs") pod "openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" (UID: "b1cd2fec-ba5b-4984-90c5-565df4ef5cd1") : secret "metrics-server-cert" not found Jan 26 22:57:16 crc kubenswrapper[4793]: E0126 22:57:16.185713 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 22:57:16 crc kubenswrapper[4793]: E0126 22:57:16.185741 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs podName:b1cd2fec-ba5b-4984-90c5-565df4ef5cd1 nodeName:}" failed. No retries permitted until 2026-01-26 22:57:18.185731747 +0000 UTC m=+1053.174503259 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs") pod "openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" (UID: "b1cd2fec-ba5b-4984-90c5-565df4ef5cd1") : secret "webhook-server-cert" not found Jan 26 22:57:16 crc kubenswrapper[4793]: E0126 22:57:16.821624 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:1f1fea3b7df89b81756eab8e6f4c9bed01ab7e949a6ce2d7692c260f41dfbc20\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" podUID="65c6d17b-b112-421c-a686-ce1601f91181" Jan 26 22:57:16 crc kubenswrapper[4793]: E0126 22:57:16.822002 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" 
pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" podUID="6e38b64f-cd6a-4cac-a125-be388bb0dc78" Jan 26 22:57:16 crc kubenswrapper[4793]: E0126 22:57:16.822053 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" podUID="06401d29-ca17-43df-865a-26523cf84f67" Jan 26 22:57:16 crc kubenswrapper[4793]: E0126 22:57:16.822093 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv" podUID="d4c8a5bf-4e12-46b4-9d2a-1a098416e90a" Jan 26 22:57:16 crc kubenswrapper[4793]: E0126 22:57:16.825397 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" podUID="6e0878fd-53dc-4048-9267-2520c5919067" Jan 26 22:57:17 crc kubenswrapper[4793]: I0126 22:57:17.319731 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8zm8h\" (UID: \"eb8ed64e-38aa-4e1f-be80-29be415125fd\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" Jan 26 22:57:17 crc 
kubenswrapper[4793]: E0126 22:57:17.319995 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 26 22:57:17 crc kubenswrapper[4793]: E0126 22:57:17.320053 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert podName:eb8ed64e-38aa-4e1f-be80-29be415125fd nodeName:}" failed. No retries permitted until 2026-01-26 22:57:21.320033146 +0000 UTC m=+1056.308804658 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert") pod "infra-operator-controller-manager-7d75bc88d5-8zm8h" (UID: "eb8ed64e-38aa-4e1f-be80-29be415125fd") : secret "infra-operator-webhook-server-cert" not found
Jan 26 22:57:17 crc kubenswrapper[4793]: I0126 22:57:17.832349 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4\" (UID: \"86ca275a-4c50-479e-9ed5-4e03dda309cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4"
Jan 26 22:57:17 crc kubenswrapper[4793]: E0126 22:57:17.832538 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 22:57:17 crc kubenswrapper[4793]: E0126 22:57:17.832915 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert podName:86ca275a-4c50-479e-9ed5-4e03dda309cf nodeName:}" failed. No retries permitted until 2026-01-26 22:57:21.832897389 +0000 UTC m=+1056.821668891 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" (UID: "86ca275a-4c50-479e-9ed5-4e03dda309cf") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 22:57:18 crc kubenswrapper[4793]: I0126 22:57:18.249057 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq"
Jan 26 22:57:18 crc kubenswrapper[4793]: I0126 22:57:18.249242 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq"
Jan 26 22:57:18 crc kubenswrapper[4793]: E0126 22:57:18.249557 4793 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 26 22:57:18 crc kubenswrapper[4793]: E0126 22:57:18.249644 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs podName:b1cd2fec-ba5b-4984-90c5-565df4ef5cd1 nodeName:}" failed. No retries permitted until 2026-01-26 22:57:22.249619385 +0000 UTC m=+1057.238390887 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs") pod "openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" (UID: "b1cd2fec-ba5b-4984-90c5-565df4ef5cd1") : secret "metrics-server-cert" not found
Jan 26 22:57:18 crc kubenswrapper[4793]: E0126 22:57:18.250269 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 26 22:57:18 crc kubenswrapper[4793]: E0126 22:57:18.250334 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs podName:b1cd2fec-ba5b-4984-90c5-565df4ef5cd1 nodeName:}" failed. No retries permitted until 2026-01-26 22:57:22.250298284 +0000 UTC m=+1057.239069796 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs") pod "openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" (UID: "b1cd2fec-ba5b-4984-90c5-565df4ef5cd1") : secret "webhook-server-cert" not found
Jan 26 22:57:21 crc kubenswrapper[4793]: I0126 22:57:21.409868 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8zm8h\" (UID: \"eb8ed64e-38aa-4e1f-be80-29be415125fd\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h"
Jan 26 22:57:21 crc kubenswrapper[4793]: E0126 22:57:21.410278 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 26 22:57:21 crc kubenswrapper[4793]: E0126 22:57:21.410589 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert podName:eb8ed64e-38aa-4e1f-be80-29be415125fd nodeName:}" failed. No retries permitted until 2026-01-26 22:57:29.410543072 +0000 UTC m=+1064.399314624 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert") pod "infra-operator-controller-manager-7d75bc88d5-8zm8h" (UID: "eb8ed64e-38aa-4e1f-be80-29be415125fd") : secret "infra-operator-webhook-server-cert" not found
Jan 26 22:57:21 crc kubenswrapper[4793]: I0126 22:57:21.919943 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4\" (UID: \"86ca275a-4c50-479e-9ed5-4e03dda309cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4"
Jan 26 22:57:21 crc kubenswrapper[4793]: E0126 22:57:21.920437 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 22:57:21 crc kubenswrapper[4793]: E0126 22:57:21.920483 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert podName:86ca275a-4c50-479e-9ed5-4e03dda309cf nodeName:}" failed. No retries permitted until 2026-01-26 22:57:29.920466443 +0000 UTC m=+1064.909237955 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" (UID: "86ca275a-4c50-479e-9ed5-4e03dda309cf") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 22:57:22 crc kubenswrapper[4793]: I0126 22:57:22.327350 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq"
Jan 26 22:57:22 crc kubenswrapper[4793]: I0126 22:57:22.327577 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq"
Jan 26 22:57:22 crc kubenswrapper[4793]: E0126 22:57:22.327615 4793 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 26 22:57:22 crc kubenswrapper[4793]: E0126 22:57:22.327753 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs podName:b1cd2fec-ba5b-4984-90c5-565df4ef5cd1 nodeName:}" failed. No retries permitted until 2026-01-26 22:57:30.327715614 +0000 UTC m=+1065.316487166 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs") pod "openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" (UID: "b1cd2fec-ba5b-4984-90c5-565df4ef5cd1") : secret "metrics-server-cert" not found
Jan 26 22:57:22 crc kubenswrapper[4793]: E0126 22:57:22.327832 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 26 22:57:22 crc kubenswrapper[4793]: E0126 22:57:22.327930 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs podName:b1cd2fec-ba5b-4984-90c5-565df4ef5cd1 nodeName:}" failed. No retries permitted until 2026-01-26 22:57:30.32789858 +0000 UTC m=+1065.316670132 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs") pod "openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" (UID: "b1cd2fec-ba5b-4984-90c5-565df4ef5cd1") : secret "webhook-server-cert" not found
Jan 26 22:57:27 crc kubenswrapper[4793]: E0126 22:57:27.847353 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/cinder-operator@sha256:7619b8e8814c4d22fcdcc392cdaba2ce279d356fc9263275c91acfba86533591"
Jan 26 22:57:27 crc kubenswrapper[4793]: E0126 22:57:27.848645 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/cinder-operator@sha256:7619b8e8814c4d22fcdcc392cdaba2ce279d356fc9263275c91acfba86533591,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-68tjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-655bf9cfbb-dgk2z_openstack-operators(58fa089e-6ccd-4521-88ff-e65e6928b738): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 22:57:27 crc kubenswrapper[4793]: E0126 22:57:27.850086 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z" podUID="58fa089e-6ccd-4521-88ff-e65e6928b738"
Jan 26 22:57:27 crc kubenswrapper[4793]: E0126 22:57:27.909049 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/cinder-operator@sha256:7619b8e8814c4d22fcdcc392cdaba2ce279d356fc9263275c91acfba86533591\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z" podUID="58fa089e-6ccd-4521-88ff-e65e6928b738"
Jan 26 22:57:28 crc kubenswrapper[4793]: E0126 22:57:28.343920 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/octavia-operator@sha256:7c854adeefcf36df519c8d48e1cfc8ca8959c5023083f2b923ba50134c4dfa22"
Jan 26 22:57:28 crc kubenswrapper[4793]: E0126 22:57:28.344117 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:7c854adeefcf36df519c8d48e1cfc8ca8959c5023083f2b923ba50134c4dfa22,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2trgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-756f86fc74-mvr7p_openstack-operators(4f086064-ee5a-47cf-bf97-fc2a423d8c33): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 22:57:28 crc kubenswrapper[4793]: E0126 22:57:28.345285 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p" podUID="4f086064-ee5a-47cf-bf97-fc2a423d8c33"
Jan 26 22:57:28 crc kubenswrapper[4793]: E0126 22:57:28.915441 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:7c854adeefcf36df519c8d48e1cfc8ca8959c5023083f2b923ba50134c4dfa22\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p" podUID="4f086064-ee5a-47cf-bf97-fc2a423d8c33"
Jan 26 22:57:29 crc kubenswrapper[4793]: I0126 22:57:29.448089 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8zm8h\" (UID: \"eb8ed64e-38aa-4e1f-be80-29be415125fd\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h"
Jan 26 22:57:29 crc kubenswrapper[4793]: I0126 22:57:29.457520 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8zm8h\" (UID: \"eb8ed64e-38aa-4e1f-be80-29be415125fd\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h"
Jan 26 22:57:29 crc kubenswrapper[4793]: I0126 22:57:29.725337 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h"
Jan 26 22:57:29 crc kubenswrapper[4793]: E0126 22:57:29.836439 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d"
Jan 26 22:57:29 crc kubenswrapper[4793]: E0126 22:57:29.836645 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jrf8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-bp8k8_openstack-operators(7968980a-764c-4cd2-b77f-ed33fffbd294): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 22:57:29 crc kubenswrapper[4793]: E0126 22:57:29.838570 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8" podUID="7968980a-764c-4cd2-b77f-ed33fffbd294"
Jan 26 22:57:29 crc kubenswrapper[4793]: E0126 22:57:29.923602 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8" podUID="7968980a-764c-4cd2-b77f-ed33fffbd294"
Jan 26 22:57:29 crc kubenswrapper[4793]: I0126 22:57:29.955601 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4\" (UID: \"86ca275a-4c50-479e-9ed5-4e03dda309cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4"
Jan 26 22:57:29 crc kubenswrapper[4793]: E0126 22:57:29.955776 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 22:57:29 crc kubenswrapper[4793]: E0126 22:57:29.955873 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert podName:86ca275a-4c50-479e-9ed5-4e03dda309cf nodeName:}" failed. No retries permitted until 2026-01-26 22:57:45.955855415 +0000 UTC m=+1080.944626927 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" (UID: "86ca275a-4c50-479e-9ed5-4e03dda309cf") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 22:57:30 crc kubenswrapper[4793]: I0126 22:57:30.361025 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq"
Jan 26 22:57:30 crc kubenswrapper[4793]: I0126 22:57:30.361121 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq"
Jan 26 22:57:30 crc kubenswrapper[4793]: E0126 22:57:30.361358 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 26 22:57:30 crc kubenswrapper[4793]: E0126 22:57:30.361423 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs podName:b1cd2fec-ba5b-4984-90c5-565df4ef5cd1 nodeName:}" failed. No retries permitted until 2026-01-26 22:57:46.361406269 +0000 UTC m=+1081.350177781 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs") pod "openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" (UID: "b1cd2fec-ba5b-4984-90c5-565df4ef5cd1") : secret "webhook-server-cert" not found
Jan 26 22:57:30 crc kubenswrapper[4793]: E0126 22:57:30.364788 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569"
Jan 26 22:57:30 crc kubenswrapper[4793]: E0126 22:57:30.364986 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-psc2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7ffd8d76d4-vxk8h_openstack-operators(b934b85e-14a0-4ad2-bdc0-82280eb346a9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 22:57:30 crc kubenswrapper[4793]: E0126 22:57:30.366479 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h" podUID="b934b85e-14a0-4ad2-bdc0-82280eb346a9"
Jan 26 22:57:30 crc kubenswrapper[4793]: I0126 22:57:30.369407 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq"
Jan 26 22:57:30 crc kubenswrapper[4793]: E0126 22:57:30.926203 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h" podUID="b934b85e-14a0-4ad2-bdc0-82280eb346a9"
Jan 26 22:57:30 crc kubenswrapper[4793]: E0126 22:57:30.977985 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/watcher-operator@sha256:53ed5a94d4b8864152053b8d3c3ac0765d178c4db033fefe0ca8dab887d74922"
Jan 26 22:57:30 crc kubenswrapper[4793]: E0126 22:57:30.978236 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:53ed5a94d4b8864152053b8d3c3ac0765d178c4db033fefe0ca8dab887d74922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pq7mf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-cb484894b-qxf7r_openstack-operators(d0e46999-98e6-40a6-99a2-f4c01dad0f81): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 22:57:30 crc kubenswrapper[4793]: E0126 22:57:30.979445 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r" podUID="d0e46999-98e6-40a6-99a2-f4c01dad0f81"
Jan 26 22:57:31 crc kubenswrapper[4793]: E0126 22:57:31.472641 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/glance-operator@sha256:bc45409dff26aca6bd982684cfaf093548adb6a71928f5257fe60ab5535dda39"
Jan 26 22:57:31 crc kubenswrapper[4793]: E0126 22:57:31.473009 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/glance-operator@sha256:bc45409dff26aca6bd982684cfaf093548adb6a71928f5257fe60ab5535dda39,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vg46x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-67dd55ff59-cpwtx_openstack-operators(f15117fc-93f9-498b-b831-e87094aa991e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 22:57:31 crc kubenswrapper[4793]: E0126 22:57:31.475700 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx" podUID="f15117fc-93f9-498b-b831-e87094aa991e"
Jan 26 22:57:31 crc kubenswrapper[4793]: E0126 22:57:31.943534 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/glance-operator@sha256:bc45409dff26aca6bd982684cfaf093548adb6a71928f5257fe60ab5535dda39\\\"\"" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx" podUID="f15117fc-93f9-498b-b831-e87094aa991e"
Jan 26 22:57:31 crc kubenswrapper[4793]: E0126 22:57:31.944500 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:53ed5a94d4b8864152053b8d3c3ac0765d178c4db033fefe0ca8dab887d74922\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r" podUID="d0e46999-98e6-40a6-99a2-f4c01dad0f81"
Jan 26 22:57:32 crc kubenswrapper[4793]: E0126 22:57:32.297776 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822"
Jan 26 22:57:32 crc kubenswrapper[4793]: E0126 22:57:32.297978 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4gr5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-vkwjg_openstack-operators(42a387a4-faad-41fe-bfa9-0f600a06e6e0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 22:57:32 crc kubenswrapper[4793]: E0126 22:57:32.299148 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg" podUID="42a387a4-faad-41fe-bfa9-0f600a06e6e0"
Jan 26 22:57:32 crc kubenswrapper[4793]: E0126 22:57:32.704952 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled"
image="38.102.83.94:5001/openstack-k8s-operators/nova-operator:1804867abea6e84ca4655efc2800c4a3a97f8b26" Jan 26 22:57:32 crc kubenswrapper[4793]: E0126 22:57:32.705024 4793 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.94:5001/openstack-k8s-operators/nova-operator:1804867abea6e84ca4655efc2800c4a3a97f8b26" Jan 26 22:57:32 crc kubenswrapper[4793]: E0126 22:57:32.705180 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.94:5001/openstack-k8s-operators/nova-operator:1804867abea6e84ca4655efc2800c4a3a97f8b26,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7gwc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5b4fc7b894-55w8b_openstack-operators(c130e1aa-1f05-45ff-8364-714b79fa7282): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 22:57:32 crc kubenswrapper[4793]: E0126 22:57:32.706408 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" podUID="c130e1aa-1f05-45ff-8364-714b79fa7282" Jan 26 22:57:32 crc kubenswrapper[4793]: E0126 22:57:32.958007 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg" podUID="42a387a4-faad-41fe-bfa9-0f600a06e6e0" Jan 26 22:57:32 crc kubenswrapper[4793]: E0126 22:57:32.958011 4793 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.94:5001/openstack-k8s-operators/nova-operator:1804867abea6e84ca4655efc2800c4a3a97f8b26\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" podUID="c130e1aa-1f05-45ff-8364-714b79fa7282" Jan 26 22:57:33 crc kubenswrapper[4793]: E0126 22:57:33.716415 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487" Jan 26 22:57:33 crc kubenswrapper[4793]: E0126 22:57:33.716651 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rm9g2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-55f684fd56-g2r87_openstack-operators(22d5cae5-26fb-4a47-97ac-78ba7120d29c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 22:57:33 crc kubenswrapper[4793]: E0126 22:57:33.718023 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87" podUID="22d5cae5-26fb-4a47-97ac-78ba7120d29c" Jan 26 22:57:33 crc kubenswrapper[4793]: E0126 22:57:33.962534 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87" podUID="22d5cae5-26fb-4a47-97ac-78ba7120d29c" Jan 26 22:57:36 crc kubenswrapper[4793]: I0126 22:57:36.605508 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h"] Jan 26 22:57:36 crc kubenswrapper[4793]: W0126 22:57:36.737363 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb8ed64e_38aa_4e1f_be80_29be415125fd.slice/crio-e4ba73d0af77e0ab359943ef706bfabae2051cd682e21088dcd3184cd120b9fb WatchSource:0}: Error finding container e4ba73d0af77e0ab359943ef706bfabae2051cd682e21088dcd3184cd120b9fb: Status 404 returned error can't find the container with id e4ba73d0af77e0ab359943ef706bfabae2051cd682e21088dcd3184cd120b9fb Jan 26 22:57:36 crc kubenswrapper[4793]: I0126 22:57:36.984858 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" event={"ID":"eb8ed64e-38aa-4e1f-be80-29be415125fd","Type":"ContainerStarted","Data":"e4ba73d0af77e0ab359943ef706bfabae2051cd682e21088dcd3184cd120b9fb"} Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.003408 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q" event={"ID":"4190249d-f34c-4b1a-a4da-038f7a806fc6","Type":"ContainerStarted","Data":"d23dd0621b5a7068dd3695410d70acb2195903e86c93bef9678f510b2ec7dbc6"} Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.004084 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.009871 4793 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn" event={"ID":"c6fcff5b-7bdb-4607-bd4d-389c863ddba6","Type":"ContainerStarted","Data":"10bbe845aa6d348cff8111285ba4412d36ca4bd1d704e3870f6df3863ebc6a52"} Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.010037 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.014352 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv" event={"ID":"d4c8a5bf-4e12-46b4-9d2a-1a098416e90a","Type":"ContainerStarted","Data":"0e3a573d5ec3ae9ac1e3d451a23e201138e72c5bb7411b28d73189d668ce340c"} Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.015618 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6987f66698-542qx" event={"ID":"71c7d600-2dac-4e19-a33f-0311a8342774","Type":"ContainerStarted","Data":"1eddbe7ec4299040e28a703a41990d68c421b51be0269f8d09c861c0671c7f2d"} Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.015740 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6987f66698-542qx" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.020769 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" event={"ID":"65c6d17b-b112-421c-a686-ce1601f91181","Type":"ContainerStarted","Data":"7dda179822c0c4ef2ccda6fd73361aac7b93007471dec0ebd04560c4d950ff43"} Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.021569 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.022084 4793 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q" podStartSLOduration=5.404335409 podStartE2EDuration="25.022061298s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:14.572228665 +0000 UTC m=+1049.561000177" lastFinishedPulling="2026-01-26 22:57:34.189954554 +0000 UTC m=+1069.178726066" observedRunningTime="2026-01-26 22:57:38.020406172 +0000 UTC m=+1073.009177684" watchObservedRunningTime="2026-01-26 22:57:38.022061298 +0000 UTC m=+1073.010832810" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.032946 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt" event={"ID":"18b41715-6c23-4821-b0da-1c17b5010375","Type":"ContainerStarted","Data":"67bbcfda85cf023096d096cd8c4e64a330e947033fe3d340153753914d282ffb"} Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.033123 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.034735 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" event={"ID":"6e38b64f-cd6a-4cac-a125-be388bb0dc78","Type":"ContainerStarted","Data":"66975fc3b9768a1cf7af7d52153d7843d6e2f903f97de2d5ace426be44c18384"} Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.035095 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.036617 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" 
event={"ID":"06401d29-ca17-43df-865a-26523cf84f67","Type":"ContainerStarted","Data":"605f9fef188bf769f8e8a6e5dccf343a2159623d7f6a69db5e8d6bf4994a49f8"} Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.036743 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.038768 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn" event={"ID":"bcf4192a-ff9b-4b02-83d0-a17ecd3ba795","Type":"ContainerStarted","Data":"c8177ef8bd105ca9f211499c072ed3ddb03a89fad05c91f07dfd61c806710a13"} Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.038841 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.040000 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf" event={"ID":"da1ca3d4-2bae-4133-88c8-b67f74ebc6ab","Type":"ContainerStarted","Data":"1e5082ac017aa7c4c17769c8fd59a4f5b49438d287605a4e025cb0dc2b40c85d"} Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.040138 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.044216 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" event={"ID":"6e0878fd-53dc-4048-9267-2520c5919067","Type":"ContainerStarted","Data":"20de12f697f8bd17447479eedf5eabbdd4afb8e4511868edcaf9105855390cb1"} Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.044470 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.047734 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn" podStartSLOduration=5.841385863 podStartE2EDuration="25.047714815s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:14.984358392 +0000 UTC m=+1049.973129904" lastFinishedPulling="2026-01-26 22:57:34.190687334 +0000 UTC m=+1069.179458856" observedRunningTime="2026-01-26 22:57:38.041345257 +0000 UTC m=+1073.030116769" watchObservedRunningTime="2026-01-26 22:57:38.047714815 +0000 UTC m=+1073.036486327" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.061477 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv" podStartSLOduration=2.486638687 podStartE2EDuration="24.061449139s" podCreationTimestamp="2026-01-26 22:57:14 +0000 UTC" firstStartedPulling="2026-01-26 22:57:15.281717353 +0000 UTC m=+1050.270488865" lastFinishedPulling="2026-01-26 22:57:36.856527805 +0000 UTC m=+1071.845299317" observedRunningTime="2026-01-26 22:57:38.056561233 +0000 UTC m=+1073.045332745" watchObservedRunningTime="2026-01-26 22:57:38.061449139 +0000 UTC m=+1073.050220651" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.074506 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6987f66698-542qx" podStartSLOduration=5.354683962 podStartE2EDuration="25.074484864s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:14.469504244 +0000 UTC m=+1049.458275756" lastFinishedPulling="2026-01-26 22:57:34.189305136 +0000 UTC m=+1069.178076658" observedRunningTime="2026-01-26 22:57:38.07041556 +0000 UTC m=+1073.059187072" 
watchObservedRunningTime="2026-01-26 22:57:38.074484864 +0000 UTC m=+1073.063256376" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.098737 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" podStartSLOduration=3.448130231 podStartE2EDuration="25.098712391s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:15.08158386 +0000 UTC m=+1050.070355372" lastFinishedPulling="2026-01-26 22:57:36.73216602 +0000 UTC m=+1071.720937532" observedRunningTime="2026-01-26 22:57:38.096647013 +0000 UTC m=+1073.085418535" watchObservedRunningTime="2026-01-26 22:57:38.098712391 +0000 UTC m=+1073.087483903" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.124115 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn" podStartSLOduration=5.958200998 podStartE2EDuration="25.12409522s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:15.02398878 +0000 UTC m=+1050.012760292" lastFinishedPulling="2026-01-26 22:57:34.189883002 +0000 UTC m=+1069.178654514" observedRunningTime="2026-01-26 22:57:38.120692105 +0000 UTC m=+1073.109463617" watchObservedRunningTime="2026-01-26 22:57:38.12409522 +0000 UTC m=+1073.112866732" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.153107 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" podStartSLOduration=4.327154996 podStartE2EDuration="25.15308552s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:15.354005743 +0000 UTC m=+1050.342777255" lastFinishedPulling="2026-01-26 22:57:36.179936227 +0000 UTC m=+1071.168707779" observedRunningTime="2026-01-26 22:57:38.145528709 +0000 UTC m=+1073.134300221" 
watchObservedRunningTime="2026-01-26 22:57:38.15308552 +0000 UTC m=+1073.141857032" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.164435 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" podStartSLOduration=3.612857504 podStartE2EDuration="25.164415857s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:15.282120454 +0000 UTC m=+1050.270891966" lastFinishedPulling="2026-01-26 22:57:36.833678807 +0000 UTC m=+1071.822450319" observedRunningTime="2026-01-26 22:57:38.16165837 +0000 UTC m=+1073.150429882" watchObservedRunningTime="2026-01-26 22:57:38.164415857 +0000 UTC m=+1073.153187369" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.185013 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt" podStartSLOduration=5.85377806 podStartE2EDuration="25.184991432s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:14.857533638 +0000 UTC m=+1049.846305150" lastFinishedPulling="2026-01-26 22:57:34.18874701 +0000 UTC m=+1069.177518522" observedRunningTime="2026-01-26 22:57:38.182451201 +0000 UTC m=+1073.171222723" watchObservedRunningTime="2026-01-26 22:57:38.184991432 +0000 UTC m=+1073.173762934" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.205622 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf" podStartSLOduration=5.979867914 podStartE2EDuration="25.205604808s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:14.964869018 +0000 UTC m=+1049.953640530" lastFinishedPulling="2026-01-26 22:57:34.190605912 +0000 UTC m=+1069.179377424" observedRunningTime="2026-01-26 22:57:38.204051134 +0000 UTC m=+1073.192822636" 
watchObservedRunningTime="2026-01-26 22:57:38.205604808 +0000 UTC m=+1073.194376320" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.230835 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" podStartSLOduration=4.508481274 podStartE2EDuration="25.230808772s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:15.421334965 +0000 UTC m=+1050.410106477" lastFinishedPulling="2026-01-26 22:57:36.143662423 +0000 UTC m=+1071.132433975" observedRunningTime="2026-01-26 22:57:38.226700918 +0000 UTC m=+1073.215472420" watchObservedRunningTime="2026-01-26 22:57:38.230808772 +0000 UTC m=+1073.219580284" Jan 26 22:57:40 crc kubenswrapper[4793]: I0126 22:57:40.057611 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z" event={"ID":"58fa089e-6ccd-4521-88ff-e65e6928b738","Type":"ContainerStarted","Data":"533cc1895afce8ac215f7b7bf08d37266ef42da014f243d2fd35892ebf33f9ab"} Jan 26 22:57:40 crc kubenswrapper[4793]: I0126 22:57:40.058227 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z" Jan 26 22:57:40 crc kubenswrapper[4793]: I0126 22:57:40.060971 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" event={"ID":"eb8ed64e-38aa-4e1f-be80-29be415125fd","Type":"ContainerStarted","Data":"d4d96d7eb5c447de8569cc67ca667162bb497037174de0057d8fcd61148b3c83"} Jan 26 22:57:40 crc kubenswrapper[4793]: I0126 22:57:40.061376 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" Jan 26 22:57:40 crc kubenswrapper[4793]: I0126 22:57:40.074548 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z" podStartSLOduration=2.189450854 podStartE2EDuration="27.074527788s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:14.448517837 +0000 UTC m=+1049.437289349" lastFinishedPulling="2026-01-26 22:57:39.333594781 +0000 UTC m=+1074.322366283" observedRunningTime="2026-01-26 22:57:40.071273647 +0000 UTC m=+1075.060045169" watchObservedRunningTime="2026-01-26 22:57:40.074527788 +0000 UTC m=+1075.063299310" Jan 26 22:57:40 crc kubenswrapper[4793]: I0126 22:57:40.102576 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" podStartSLOduration=24.511396936 podStartE2EDuration="27.10254176s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:36.75007203 +0000 UTC m=+1071.738843532" lastFinishedPulling="2026-01-26 22:57:39.341216844 +0000 UTC m=+1074.329988356" observedRunningTime="2026-01-26 22:57:40.100051971 +0000 UTC m=+1075.088823573" watchObservedRunningTime="2026-01-26 22:57:40.10254176 +0000 UTC m=+1075.091313312" Jan 26 22:57:42 crc kubenswrapper[4793]: I0126 22:57:42.076518 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p" event={"ID":"4f086064-ee5a-47cf-bf97-fc2a423d8c33","Type":"ContainerStarted","Data":"c09284cf0b42ba68a80615939cd4ef1c3592469ee39229e0c87aa7e7b9187990"} Jan 26 22:57:42 crc kubenswrapper[4793]: I0126 22:57:42.078314 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p" Jan 26 22:57:42 crc kubenswrapper[4793]: I0126 22:57:42.100640 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p" podStartSLOduration=2.994285356 
podStartE2EDuration="29.100617019s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:15.078939166 +0000 UTC m=+1050.067710678" lastFinishedPulling="2026-01-26 22:57:41.185270829 +0000 UTC m=+1076.174042341" observedRunningTime="2026-01-26 22:57:42.098681215 +0000 UTC m=+1077.087452727" watchObservedRunningTime="2026-01-26 22:57:42.100617019 +0000 UTC m=+1077.089388531" Jan 26 22:57:42 crc kubenswrapper[4793]: I0126 22:57:42.764134 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 22:57:43 crc kubenswrapper[4793]: I0126 22:57:43.088987 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h" event={"ID":"b934b85e-14a0-4ad2-bdc0-82280eb346a9","Type":"ContainerStarted","Data":"95a0a1d32f2c55c9f15ac7a52692f4faa1f6c112e1766a57aca010c4df81ef34"} Jan 26 22:57:43 crc kubenswrapper[4793]: I0126 22:57:43.089456 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h" Jan 26 22:57:43 crc kubenswrapper[4793]: I0126 22:57:43.116337 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h" podStartSLOduration=2.958469386 podStartE2EDuration="30.116304765s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:15.013875097 +0000 UTC m=+1050.002646609" lastFinishedPulling="2026-01-26 22:57:42.171710486 +0000 UTC m=+1077.160481988" observedRunningTime="2026-01-26 22:57:43.113460655 +0000 UTC m=+1078.102232177" watchObservedRunningTime="2026-01-26 22:57:43.116304765 +0000 UTC m=+1078.105076307" Jan 26 22:57:43 crc kubenswrapper[4793]: I0126 22:57:43.684516 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/barbican-operator-controller-manager-6987f66698-542qx" Jan 26 22:57:43 crc kubenswrapper[4793]: I0126 22:57:43.755080 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q" Jan 26 22:57:43 crc kubenswrapper[4793]: I0126 22:57:43.821921 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt" Jan 26 22:57:43 crc kubenswrapper[4793]: I0126 22:57:43.927574 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn" Jan 26 22:57:43 crc kubenswrapper[4793]: I0126 22:57:43.965983 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn" Jan 26 22:57:44 crc kubenswrapper[4793]: I0126 22:57:44.076301 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf" Jan 26 22:57:44 crc kubenswrapper[4793]: I0126 22:57:44.101800 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" event={"ID":"c130e1aa-1f05-45ff-8364-714b79fa7282","Type":"ContainerStarted","Data":"27aa43d53cac21ae560dc4dfba6c32cef6066e0e37407cd3e0517161f5497f76"} Jan 26 22:57:44 crc kubenswrapper[4793]: I0126 22:57:44.102731 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" Jan 26 22:57:44 crc kubenswrapper[4793]: I0126 22:57:44.107167 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r" 
event={"ID":"d0e46999-98e6-40a6-99a2-f4c01dad0f81","Type":"ContainerStarted","Data":"210ee7d659ac2088be642e7c257ab164b002b8a31e1d643f3a3d99dbe380be20"} Jan 26 22:57:44 crc kubenswrapper[4793]: I0126 22:57:44.107532 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r" Jan 26 22:57:44 crc kubenswrapper[4793]: I0126 22:57:44.130154 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" podStartSLOduration=2.319216941 podStartE2EDuration="31.130135068s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:15.076510638 +0000 UTC m=+1050.065282140" lastFinishedPulling="2026-01-26 22:57:43.887428755 +0000 UTC m=+1078.876200267" observedRunningTime="2026-01-26 22:57:44.122126694 +0000 UTC m=+1079.110898216" watchObservedRunningTime="2026-01-26 22:57:44.130135068 +0000 UTC m=+1079.118906580" Jan 26 22:57:44 crc kubenswrapper[4793]: I0126 22:57:44.142662 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r" podStartSLOduration=2.885131907 podStartE2EDuration="31.142646138s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:15.527104641 +0000 UTC m=+1050.515876153" lastFinishedPulling="2026-01-26 22:57:43.784618862 +0000 UTC m=+1078.773390384" observedRunningTime="2026-01-26 22:57:44.137041061 +0000 UTC m=+1079.125812583" watchObservedRunningTime="2026-01-26 22:57:44.142646138 +0000 UTC m=+1079.131417650" Jan 26 22:57:44 crc kubenswrapper[4793]: I0126 22:57:44.312523 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" Jan 26 22:57:44 crc kubenswrapper[4793]: I0126 22:57:44.374149 4793 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" Jan 26 22:57:44 crc kubenswrapper[4793]: I0126 22:57:44.489846 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" Jan 26 22:57:44 crc kubenswrapper[4793]: I0126 22:57:44.502952 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" Jan 26 22:57:45 crc kubenswrapper[4793]: I0126 22:57:45.116161 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8" event={"ID":"7968980a-764c-4cd2-b77f-ed33fffbd294","Type":"ContainerStarted","Data":"a9d60752a68cd5b97e4b08ed3f0ccb384d441f3a237acd514d2caf36b8252743"} Jan 26 22:57:45 crc kubenswrapper[4793]: I0126 22:57:45.116942 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8" Jan 26 22:57:45 crc kubenswrapper[4793]: I0126 22:57:45.142681 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8" podStartSLOduration=3.1013690289999998 podStartE2EDuration="32.142652315s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:15.265344885 +0000 UTC m=+1050.254116397" lastFinishedPulling="2026-01-26 22:57:44.306628171 +0000 UTC m=+1079.295399683" observedRunningTime="2026-01-26 22:57:45.138033866 +0000 UTC m=+1080.126805418" watchObservedRunningTime="2026-01-26 22:57:45.142652315 +0000 UTC m=+1080.131423827" Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.026618 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4\" (UID: \"86ca275a-4c50-479e-9ed5-4e03dda309cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.041035 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4\" (UID: \"86ca275a-4c50-479e-9ed5-4e03dda309cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.052525 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-q4r9v" Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.061042 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.133946 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx" event={"ID":"f15117fc-93f9-498b-b831-e87094aa991e","Type":"ContainerStarted","Data":"ce3cc98315a3fdff0be513df123c307ce44b07da8527ed704181d7fe7ffe2863"} Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.134731 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx" Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.157554 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx" podStartSLOduration=2.766079468 podStartE2EDuration="33.157529697s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:14.847304132 +0000 UTC m=+1049.836075644" lastFinishedPulling="2026-01-26 22:57:45.238754361 +0000 UTC m=+1080.227525873" observedRunningTime="2026-01-26 22:57:46.15299371 +0000 UTC m=+1081.141765222" watchObservedRunningTime="2026-01-26 22:57:46.157529697 +0000 UTC m=+1081.146301249" Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.432892 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.439096 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.503857 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lmwx4" Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.511097 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.591026 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4"] Jan 26 22:57:46 crc kubenswrapper[4793]: W0126 22:57:46.598674 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86ca275a_4c50_479e_9ed5_4e03dda309cf.slice/crio-5fc0838615414c0ccae7e0dbe23b3cb4f75e949b164bef01876c04c3c2f59bd4 WatchSource:0}: Error finding container 5fc0838615414c0ccae7e0dbe23b3cb4f75e949b164bef01876c04c3c2f59bd4: Status 404 returned error can't find the container with id 5fc0838615414c0ccae7e0dbe23b3cb4f75e949b164bef01876c04c3c2f59bd4 Jan 26 22:57:46 crc kubenswrapper[4793]: W0126 22:57:46.813057 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1cd2fec_ba5b_4984_90c5_565df4ef5cd1.slice/crio-afb0555efe5d877488100f2bc54fc4b59d7aa0f786fec46f34157df5a3fa164d WatchSource:0}: Error finding container afb0555efe5d877488100f2bc54fc4b59d7aa0f786fec46f34157df5a3fa164d: Status 404 returned error can't find the container with id afb0555efe5d877488100f2bc54fc4b59d7aa0f786fec46f34157df5a3fa164d Jan 26 
22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.817269 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq"] Jan 26 22:57:47 crc kubenswrapper[4793]: I0126 22:57:47.146750 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" event={"ID":"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1","Type":"ContainerStarted","Data":"d6e30b4d05761b2898142b60a945bf5293d9605373383b754290c5a6a17d1427"} Jan 26 22:57:47 crc kubenswrapper[4793]: I0126 22:57:47.147393 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:47 crc kubenswrapper[4793]: I0126 22:57:47.147410 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" event={"ID":"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1","Type":"ContainerStarted","Data":"afb0555efe5d877488100f2bc54fc4b59d7aa0f786fec46f34157df5a3fa164d"} Jan 26 22:57:47 crc kubenswrapper[4793]: I0126 22:57:47.148608 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" event={"ID":"86ca275a-4c50-479e-9ed5-4e03dda309cf","Type":"ContainerStarted","Data":"5fc0838615414c0ccae7e0dbe23b3cb4f75e949b164bef01876c04c3c2f59bd4"} Jan 26 22:57:47 crc kubenswrapper[4793]: I0126 22:57:47.188623 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" podStartSLOduration=33.188569452 podStartE2EDuration="33.188569452s" podCreationTimestamp="2026-01-26 22:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:57:47.183973663 +0000 UTC m=+1082.172745205" 
watchObservedRunningTime="2026-01-26 22:57:47.188569452 +0000 UTC m=+1082.177340974" Jan 26 22:57:48 crc kubenswrapper[4793]: I0126 22:57:48.161713 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87" event={"ID":"22d5cae5-26fb-4a47-97ac-78ba7120d29c","Type":"ContainerStarted","Data":"6603f6dcec8adfa1e758ca8200f60813a477761f23272ac80eb0016d47738eec"} Jan 26 22:57:48 crc kubenswrapper[4793]: I0126 22:57:48.162414 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87" Jan 26 22:57:48 crc kubenswrapper[4793]: I0126 22:57:48.164631 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg" event={"ID":"42a387a4-faad-41fe-bfa9-0f600a06e6e0","Type":"ContainerStarted","Data":"989a1d347dded3eb96f4bfc83b03b39fcbc70258d523f7299cc728d9cda48121"} Jan 26 22:57:48 crc kubenswrapper[4793]: I0126 22:57:48.164944 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg" Jan 26 22:57:48 crc kubenswrapper[4793]: I0126 22:57:48.189403 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87" podStartSLOduration=3.005734047 podStartE2EDuration="35.189382761s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:15.019367961 +0000 UTC m=+1050.008139473" lastFinishedPulling="2026-01-26 22:57:47.203016665 +0000 UTC m=+1082.191788187" observedRunningTime="2026-01-26 22:57:48.187140449 +0000 UTC m=+1083.175912001" watchObservedRunningTime="2026-01-26 22:57:48.189382761 +0000 UTC m=+1083.178154273" Jan 26 22:57:48 crc kubenswrapper[4793]: I0126 22:57:48.214609 4793 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg" podStartSLOduration=2.854735186 podStartE2EDuration="35.214576535s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:14.840833671 +0000 UTC m=+1049.829605183" lastFinishedPulling="2026-01-26 22:57:47.20067502 +0000 UTC m=+1082.189446532" observedRunningTime="2026-01-26 22:57:48.209336749 +0000 UTC m=+1083.198108261" watchObservedRunningTime="2026-01-26 22:57:48.214576535 +0000 UTC m=+1083.203348047" Jan 26 22:57:49 crc kubenswrapper[4793]: I0126 22:57:49.732869 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" Jan 26 22:57:53 crc kubenswrapper[4793]: I0126 22:57:53.725938 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z" Jan 26 22:57:53 crc kubenswrapper[4793]: I0126 22:57:53.782513 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx" Jan 26 22:57:53 crc kubenswrapper[4793]: I0126 22:57:53.807115 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg" Jan 26 22:57:53 crc kubenswrapper[4793]: I0126 22:57:53.931237 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87" Jan 26 22:57:54 crc kubenswrapper[4793]: I0126 22:57:54.092433 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" Jan 26 22:57:54 crc kubenswrapper[4793]: I0126 22:57:54.125227 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h" Jan 26 22:57:54 crc kubenswrapper[4793]: I0126 22:57:54.139288 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p" Jan 26 22:57:54 crc kubenswrapper[4793]: I0126 22:57:54.340577 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8" Jan 26 22:57:54 crc kubenswrapper[4793]: I0126 22:57:54.654884 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r" Jan 26 22:57:56 crc kubenswrapper[4793]: I0126 22:57:56.523058 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:59 crc kubenswrapper[4793]: E0126 22:57:59.394104 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:dae767a3ae652ffc70ba60c5bf2b5bf72c12d939353053e231b258948ededb22" Jan 26 22:57:59 crc kubenswrapper[4793]: E0126 22:57:59.395491 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:dae767a3ae652ffc70ba60c5bf2b5bf72c12d939353053e231b258948ededb22,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOME
TER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom
:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack
-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVa
r{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUT
E_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,}
,EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueF
rom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9pwkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4_openstack-operators(86ca275a-4c50-479e-9ed5-4e03dda309cf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 22:57:59 crc kubenswrapper[4793]: E0126 22:57:59.396910 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" podUID="86ca275a-4c50-479e-9ed5-4e03dda309cf" Jan 26 22:58:00 crc kubenswrapper[4793]: E0126 22:58:00.261287 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:dae767a3ae652ffc70ba60c5bf2b5bf72c12d939353053e231b258948ededb22\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" podUID="86ca275a-4c50-479e-9ed5-4e03dda309cf" Jan 26 22:58:16 crc kubenswrapper[4793]: I0126 22:58:16.388530 
4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" event={"ID":"86ca275a-4c50-479e-9ed5-4e03dda309cf","Type":"ContainerStarted","Data":"eb3ac00bcfde1ae7938c10fab76f46803fbd127e699542fd66b42789e5d4c882"} Jan 26 22:58:16 crc kubenswrapper[4793]: I0126 22:58:16.389828 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:58:16 crc kubenswrapper[4793]: I0126 22:58:16.425538 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" podStartSLOduration=34.577739177 podStartE2EDuration="1m3.425511135s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:46.601859405 +0000 UTC m=+1081.590630927" lastFinishedPulling="2026-01-26 22:58:15.449631333 +0000 UTC m=+1110.438402885" observedRunningTime="2026-01-26 22:58:16.423902891 +0000 UTC m=+1111.412674403" watchObservedRunningTime="2026-01-26 22:58:16.425511135 +0000 UTC m=+1111.414282647" Jan 26 22:58:18 crc kubenswrapper[4793]: I0126 22:58:18.323016 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:58:18 crc kubenswrapper[4793]: I0126 22:58:18.323563 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:58:26 crc kubenswrapper[4793]: I0126 
22:58:26.070786 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.230128 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.231993 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.235023 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-plugins-conf" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.235089 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"kube-root-ca.crt" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.236374 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openshift-service-ca.crt" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.236550 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-server-dockercfg-pnjfr" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.236702 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-server-conf" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.237187 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-erlang-cookie" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.240421 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-default-user" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.256740 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.375889 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/733142d4-49c6-4e25-a160-78aa1118d296-server-conf\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.375952 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/733142d4-49c6-4e25-a160-78aa1118d296-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.376012 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqhc4\" (UniqueName: \"kubernetes.io/projected/733142d4-49c6-4e25-a160-78aa1118d296-kube-api-access-cqhc4\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.376043 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/733142d4-49c6-4e25-a160-78aa1118d296-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.376166 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/733142d4-49c6-4e25-a160-78aa1118d296-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.376241 
4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bee975f4-67e8-48bd-93f2-1fe39eea72cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bee975f4-67e8-48bd-93f2-1fe39eea72cb\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.376393 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/733142d4-49c6-4e25-a160-78aa1118d296-pod-info\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.376465 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/733142d4-49c6-4e25-a160-78aa1118d296-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.376508 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/733142d4-49c6-4e25-a160-78aa1118d296-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.478281 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bee975f4-67e8-48bd-93f2-1fe39eea72cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bee975f4-67e8-48bd-93f2-1fe39eea72cb\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc 
kubenswrapper[4793]: I0126 22:58:35.478361 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/733142d4-49c6-4e25-a160-78aa1118d296-pod-info\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.478394 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/733142d4-49c6-4e25-a160-78aa1118d296-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.478422 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/733142d4-49c6-4e25-a160-78aa1118d296-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.478443 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/733142d4-49c6-4e25-a160-78aa1118d296-server-conf\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.478463 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/733142d4-49c6-4e25-a160-78aa1118d296-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.478498 4793 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-cqhc4\" (UniqueName: \"kubernetes.io/projected/733142d4-49c6-4e25-a160-78aa1118d296-kube-api-access-cqhc4\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.478518 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/733142d4-49c6-4e25-a160-78aa1118d296-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.478566 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/733142d4-49c6-4e25-a160-78aa1118d296-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.479387 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/733142d4-49c6-4e25-a160-78aa1118d296-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.479722 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/733142d4-49c6-4e25-a160-78aa1118d296-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.479784 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/733142d4-49c6-4e25-a160-78aa1118d296-server-conf\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.479747 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/733142d4-49c6-4e25-a160-78aa1118d296-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.483134 4793 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.483175 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bee975f4-67e8-48bd-93f2-1fe39eea72cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bee975f4-67e8-48bd-93f2-1fe39eea72cb\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c84253ff34d7709066a0f64547ac926619bbfc7fcf55095c6f0d9139388f0b03/globalmount\"" pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.486989 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/733142d4-49c6-4e25-a160-78aa1118d296-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.487127 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/733142d4-49c6-4e25-a160-78aa1118d296-pod-info\") pod \"rabbitmq-server-0\" 
(UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.487175 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/733142d4-49c6-4e25-a160-78aa1118d296-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.513446 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bee975f4-67e8-48bd-93f2-1fe39eea72cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bee975f4-67e8-48bd-93f2-1fe39eea72cb\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.513874 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqhc4\" (UniqueName: \"kubernetes.io/projected/733142d4-49c6-4e25-a160-78aa1118d296-kube-api-access-cqhc4\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.535334 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"] Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.537587 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.542977 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-default-user" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.543065 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-broadcaster-plugins-conf" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.543282 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-erlang-cookie" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.543347 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-server-dockercfg-lmfsk" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.543368 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-broadcaster-server-conf" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.548908 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.562316 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"] Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.690454 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.690837 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.690871 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.690913 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 
22:58:35.690942 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0ae7e116-00a5-4343-8d68-c4c4bff49d81\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ae7e116-00a5-4343-8d68-c4c4bff49d81\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.691053 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.691112 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.691217 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwckb\" (UniqueName: \"kubernetes.io/projected/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-kube-api-access-kwckb\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.691258 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: 
\"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.795938 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.796020 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.796075 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.796112 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0ae7e116-00a5-4343-8d68-c4c4bff49d81\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ae7e116-00a5-4343-8d68-c4c4bff49d81\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.796138 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.796168 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.796226 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwckb\" (UniqueName: \"kubernetes.io/projected/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-kube-api-access-kwckb\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.796263 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.796341 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.804415 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.805346 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.806193 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.806294 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.806673 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.808760 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.809448 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.809845 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.811056 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.822581 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-default-user" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.823002 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-erlang-cookie" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.823320 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-cell1-server-conf" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.823567 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-server-dockercfg-752gh" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.823772 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-cell1-plugins-conf" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.824507 4793 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"]
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.827047 4793 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.827101 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0ae7e116-00a5-4343-8d68-c4c4bff49d81\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ae7e116-00a5-4343-8d68-c4c4bff49d81\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c1653d9c4aeb4b7737476ff27069b6a90b3652e944ebf43c1da6f7c30c8acb21/globalmount\"" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.840663 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwckb\" (UniqueName: \"kubernetes.io/projected/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-kube-api-access-kwckb\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.876730 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0ae7e116-00a5-4343-8d68-c4c4bff49d81\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ae7e116-00a5-4343-8d68-c4c4bff49d81\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.893543 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.897912 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.897956 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.897998 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.898049 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-96ac8990-6644-43aa-90a6-7184f261a339\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96ac8990-6644-43aa-90a6-7184f261a339\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.898249 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.898361 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.900520 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.900611 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.900857 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqzdq\" (UniqueName: \"kubernetes.io/projected/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-kube-api-access-kqzdq\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.002403 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.002470 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.002527 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqzdq\" (UniqueName: \"kubernetes.io/projected/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-kube-api-access-kqzdq\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.002551 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.002578 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.002628 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.002687 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-96ac8990-6644-43aa-90a6-7184f261a339\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96ac8990-6644-43aa-90a6-7184f261a339\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.002731 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.002758 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.004355 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.008774 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.008827 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.009692 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.010013 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.010075 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.011766 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.012086 4793 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.012117 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-96ac8990-6644-43aa-90a6-7184f261a339\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96ac8990-6644-43aa-90a6-7184f261a339\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f1b65e6087beb8f396d5bde46e8010e1ab98c55dd5e3fbf5a99be8a053675d77/globalmount\"" pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.023132 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstack-galera-0"]
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.024507 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.036533 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqzdq\" (UniqueName: \"kubernetes.io/projected/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-kube-api-access-kqzdq\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.036802 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"galera-openstack-dockercfg-9nz9x"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.037149 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"cert-galera-openstack-svc"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.039146 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-scripts"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.039421 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-config-data"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.044155 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"combined-ca-bundle"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.067131 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-galera-0"]
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.068121 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-96ac8990-6644-43aa-90a6-7184f261a339\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96ac8990-6644-43aa-90a6-7184f261a339\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.087627 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"]
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.105624 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf954a6-b71a-49f9-93c5-618d4e944159-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.107452 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf954a6-b71a-49f9-93c5-618d4e944159-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.107591 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxrmt\" (UniqueName: \"kubernetes.io/projected/9cf954a6-b71a-49f9-93c5-618d4e944159-kube-api-access-cxrmt\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.107823 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9cf954a6-b71a-49f9-93c5-618d4e944159-config-data-default\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.107890 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-be60587c-fb74-41f0-9193-a9735fe5320f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be60587c-fb74-41f0-9193-a9735fe5320f\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.107912 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9cf954a6-b71a-49f9-93c5-618d4e944159-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.107991 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cf954a6-b71a-49f9-93c5-618d4e944159-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.108082 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cf954a6-b71a-49f9-93c5-618d4e944159-kolla-config\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.139451 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-notifications-server-0"]
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.145022 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.147112 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.155974 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-notifications-erlang-cookie"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.156364 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-notifications-server-dockercfg-9s6l9"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.156572 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-notifications-default-user"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.156771 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-notifications-plugins-conf"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.156966 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-notifications-server-conf"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.174376 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-notifications-server-0"]
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210399 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70c43064-b9c2-4f01-9bc2-5431c5dca494-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210496 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6n7w\" (UniqueName: \"kubernetes.io/projected/70c43064-b9c2-4f01-9bc2-5431c5dca494-kube-api-access-l6n7w\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210533 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9cf954a6-b71a-49f9-93c5-618d4e944159-config-data-default\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210559 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/70c43064-b9c2-4f01-9bc2-5431c5dca494-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210588 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-be60587c-fb74-41f0-9193-a9735fe5320f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be60587c-fb74-41f0-9193-a9735fe5320f\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210609 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9cf954a6-b71a-49f9-93c5-618d4e944159-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210631 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cf954a6-b71a-49f9-93c5-618d4e944159-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210664 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/70c43064-b9c2-4f01-9bc2-5431c5dca494-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210689 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cf954a6-b71a-49f9-93c5-618d4e944159-kolla-config\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210724 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf954a6-b71a-49f9-93c5-618d4e944159-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210747 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70c43064-b9c2-4f01-9bc2-5431c5dca494-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210770 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70c43064-b9c2-4f01-9bc2-5431c5dca494-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210799 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf954a6-b71a-49f9-93c5-618d4e944159-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210827 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/70c43064-b9c2-4f01-9bc2-5431c5dca494-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210849 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxrmt\" (UniqueName: \"kubernetes.io/projected/9cf954a6-b71a-49f9-93c5-618d4e944159-kube-api-access-cxrmt\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210871 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70c43064-b9c2-4f01-9bc2-5431c5dca494-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210942 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-25d62f2c-bdb0-46f0-a08c-5e382b96747a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25d62f2c-bdb0-46f0-a08c-5e382b96747a\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.212286 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9cf954a6-b71a-49f9-93c5-618d4e944159-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.212720 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9cf954a6-b71a-49f9-93c5-618d4e944159-config-data-default\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.213713 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cf954a6-b71a-49f9-93c5-618d4e944159-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.214481 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cf954a6-b71a-49f9-93c5-618d4e944159-kolla-config\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.225319 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf954a6-b71a-49f9-93c5-618d4e944159-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.226728 4793 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.226758 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-be60587c-fb74-41f0-9193-a9735fe5320f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be60587c-fb74-41f0-9193-a9735fe5320f\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/49d64d0e1d76b735b81103bcddd20a8a89d874a94e2a3556534488c056649f70/globalmount\"" pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.228541 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf954a6-b71a-49f9-93c5-618d4e944159-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.233445 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxrmt\" (UniqueName: \"kubernetes.io/projected/9cf954a6-b71a-49f9-93c5-618d4e944159-kube-api-access-cxrmt\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.301051 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-be60587c-fb74-41f0-9193-a9735fe5320f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be60587c-fb74-41f0-9193-a9735fe5320f\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.316104 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/70c43064-b9c2-4f01-9bc2-5431c5dca494-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.316192 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/70c43064-b9c2-4f01-9bc2-5431c5dca494-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.316260 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70c43064-b9c2-4f01-9bc2-5431c5dca494-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.316292 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70c43064-b9c2-4f01-9bc2-5431c5dca494-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.316324 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/70c43064-b9c2-4f01-9bc2-5431c5dca494-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.316358 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70c43064-b9c2-4f01-9bc2-5431c5dca494-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.316438 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-25d62f2c-bdb0-46f0-a08c-5e382b96747a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25d62f2c-bdb0-46f0-a08c-5e382b96747a\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.316483 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70c43064-b9c2-4f01-9bc2-5431c5dca494-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.316524 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6n7w\" (UniqueName: \"kubernetes.io/projected/70c43064-b9c2-4f01-9bc2-5431c5dca494-kube-api-access-l6n7w\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.324277 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/70c43064-b9c2-4f01-9bc2-5431c5dca494-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.324566 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70c43064-b9c2-4f01-9bc2-5431c5dca494-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.324987 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/70c43064-b9c2-4f01-9bc2-5431c5dca494-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.325318 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70c43064-b9c2-4f01-9bc2-5431c5dca494-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.325991 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70c43064-b9c2-4f01-9bc2-5431c5dca494-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.326297 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/70c43064-b9c2-4f01-9bc2-5431c5dca494-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.329862 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70c43064-b9c2-4f01-9bc2-5431c5dca494-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.331389 4793 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.331446 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-25d62f2c-bdb0-46f0-a08c-5e382b96747a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25d62f2c-bdb0-46f0-a08c-5e382b96747a\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f5b5053d94bc750fff8de7b0b157012f63097405e6337ff112a2f4d24dfa8b4d/globalmount\"" pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.358078 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6n7w\" (UniqueName: \"kubernetes.io/projected/70c43064-b9c2-4f01-9bc2-5431c5dca494-kube-api-access-l6n7w\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.365316 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"]
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.381633 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstack-galera-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.435002 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-25d62f2c-bdb0-46f0-a08c-5e382b96747a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25d62f2c-bdb0-46f0-a08c-5e382b96747a\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.485566 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.497841 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"]
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.592793 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"733142d4-49c6-4e25-a160-78aa1118d296","Type":"ContainerStarted","Data":"db624f0f3d3cb164ac2b7a1a8b687d140077b13e2c996d37dd1dc3311975ddd7"}
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.608491 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a","Type":"ContainerStarted","Data":"878a021024ca930bea18b6297f6a00be04621ac4fa1a27feb90ecbb735f4909d"}
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.653745 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0"
event={"ID":"a8704cd7-a5e9-45ca-9886-cef2f797c7f1","Type":"ContainerStarted","Data":"4817f3f121ed18c631ca49dc418d7fe8c4a8f3ab69cf8b3bca4b17b3a8d1a830"} Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.776109 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/memcached-0"] Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.780885 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/memcached-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.783329 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"memcached-memcached-dockercfg-9hdb7" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.784105 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"memcached-config-data" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.792608 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/memcached-0"] Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.837634 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsllm\" (UniqueName: \"kubernetes.io/projected/d933c86b-bb42-4d94-9eb2-65888a5e95ab-kube-api-access-zsllm\") pod \"memcached-0\" (UID: \"d933c86b-bb42-4d94-9eb2-65888a5e95ab\") " pod="nova-kuttl-default/memcached-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.837697 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d933c86b-bb42-4d94-9eb2-65888a5e95ab-config-data\") pod \"memcached-0\" (UID: \"d933c86b-bb42-4d94-9eb2-65888a5e95ab\") " pod="nova-kuttl-default/memcached-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.837763 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/d933c86b-bb42-4d94-9eb2-65888a5e95ab-kolla-config\") pod \"memcached-0\" (UID: \"d933c86b-bb42-4d94-9eb2-65888a5e95ab\") " pod="nova-kuttl-default/memcached-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.939829 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d933c86b-bb42-4d94-9eb2-65888a5e95ab-kolla-config\") pod \"memcached-0\" (UID: \"d933c86b-bb42-4d94-9eb2-65888a5e95ab\") " pod="nova-kuttl-default/memcached-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.939956 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsllm\" (UniqueName: \"kubernetes.io/projected/d933c86b-bb42-4d94-9eb2-65888a5e95ab-kube-api-access-zsllm\") pod \"memcached-0\" (UID: \"d933c86b-bb42-4d94-9eb2-65888a5e95ab\") " pod="nova-kuttl-default/memcached-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.939990 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d933c86b-bb42-4d94-9eb2-65888a5e95ab-config-data\") pod \"memcached-0\" (UID: \"d933c86b-bb42-4d94-9eb2-65888a5e95ab\") " pod="nova-kuttl-default/memcached-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.940971 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d933c86b-bb42-4d94-9eb2-65888a5e95ab-config-data\") pod \"memcached-0\" (UID: \"d933c86b-bb42-4d94-9eb2-65888a5e95ab\") " pod="nova-kuttl-default/memcached-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.941407 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d933c86b-bb42-4d94-9eb2-65888a5e95ab-kolla-config\") pod \"memcached-0\" (UID: \"d933c86b-bb42-4d94-9eb2-65888a5e95ab\") " pod="nova-kuttl-default/memcached-0" Jan 26 22:58:36 
crc kubenswrapper[4793]: I0126 22:58:36.962132 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsllm\" (UniqueName: \"kubernetes.io/projected/d933c86b-bb42-4d94-9eb2-65888a5e95ab-kube-api-access-zsllm\") pod \"memcached-0\" (UID: \"d933c86b-bb42-4d94-9eb2-65888a5e95ab\") " pod="nova-kuttl-default/memcached-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.996924 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-galera-0"] Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.108359 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/memcached-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.128226 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-notifications-server-0"] Jan 26 22:58:37 crc kubenswrapper[4793]: W0126 22:58:37.129606 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70c43064_b9c2_4f01_9bc2_5431c5dca494.slice/crio-316c559c4180b890f674527d6a80bfa7c26cae158117ac7275ee545091690ca2 WatchSource:0}: Error finding container 316c559c4180b890f674527d6a80bfa7c26cae158117ac7275ee545091690ca2: Status 404 returned error can't find the container with id 316c559c4180b890f674527d6a80bfa7c26cae158117ac7275ee545091690ca2 Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.592079 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/memcached-0"] Jan 26 22:58:37 crc kubenswrapper[4793]: W0126 22:58:37.602591 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd933c86b_bb42_4d94_9eb2_65888a5e95ab.slice/crio-53f1bd395a3c556cfe8d89fdb59756199d13803ddf8f08098845495f4a385a50 WatchSource:0}: Error finding container 53f1bd395a3c556cfe8d89fdb59756199d13803ddf8f08098845495f4a385a50: Status 404 
returned error can't find the container with id 53f1bd395a3c556cfe8d89fdb59756199d13803ddf8f08098845495f4a385a50 Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.665668 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"9cf954a6-b71a-49f9-93c5-618d4e944159","Type":"ContainerStarted","Data":"ddbd86b057746cacd08a276158c06d9a8ef6d6941d330c26e761665ea5835474"} Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.668086 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"70c43064-b9c2-4f01-9bc2-5431c5dca494","Type":"ContainerStarted","Data":"316c559c4180b890f674527d6a80bfa7c26cae158117ac7275ee545091690ca2"} Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.669585 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/memcached-0" event={"ID":"d933c86b-bb42-4d94-9eb2-65888a5e95ab","Type":"ContainerStarted","Data":"53f1bd395a3c556cfe8d89fdb59756199d13803ddf8f08098845495f4a385a50"} Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.707588 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.714308 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.718215 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"galera-openstack-cell1-dockercfg-xg6r2" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.718441 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-cell1-scripts" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.721950 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"cert-galera-openstack-cell1-svc" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.729867 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-cell1-config-data" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.731380 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.756652 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7r2m\" (UniqueName: \"kubernetes.io/projected/1fc3daf2-84e7-4831-9eae-155632f1b0cd-kube-api-access-q7r2m\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.756710 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-071cfedb-1347-4743-a581-3effab99c2f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-071cfedb-1347-4743-a581-3effab99c2f0\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.756749 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1fc3daf2-84e7-4831-9eae-155632f1b0cd-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.756816 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1fc3daf2-84e7-4831-9eae-155632f1b0cd-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.756833 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc3daf2-84e7-4831-9eae-155632f1b0cd-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.756888 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fc3daf2-84e7-4831-9eae-155632f1b0cd-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.756916 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fc3daf2-84e7-4831-9eae-155632f1b0cd-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc 
kubenswrapper[4793]: I0126 22:58:37.756962 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1fc3daf2-84e7-4831-9eae-155632f1b0cd-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.858016 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1fc3daf2-84e7-4831-9eae-155632f1b0cd-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.858099 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1fc3daf2-84e7-4831-9eae-155632f1b0cd-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.858122 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc3daf2-84e7-4831-9eae-155632f1b0cd-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.858167 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fc3daf2-84e7-4831-9eae-155632f1b0cd-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc 
kubenswrapper[4793]: I0126 22:58:37.858211 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fc3daf2-84e7-4831-9eae-155632f1b0cd-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.858256 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1fc3daf2-84e7-4831-9eae-155632f1b0cd-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.858283 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7r2m\" (UniqueName: \"kubernetes.io/projected/1fc3daf2-84e7-4831-9eae-155632f1b0cd-kube-api-access-q7r2m\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.858303 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-071cfedb-1347-4743-a581-3effab99c2f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-071cfedb-1347-4743-a581-3effab99c2f0\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.859231 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1fc3daf2-84e7-4831-9eae-155632f1b0cd-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " 
pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.859667 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1fc3daf2-84e7-4831-9eae-155632f1b0cd-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.861028 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1fc3daf2-84e7-4831-9eae-155632f1b0cd-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.864433 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fc3daf2-84e7-4831-9eae-155632f1b0cd-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.868023 4793 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.868070 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-071cfedb-1347-4743-a581-3effab99c2f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-071cfedb-1347-4743-a581-3effab99c2f0\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/85fb299fc637fb168ba164470fbe00a2809160fb168d25cec10801b57445b91c/globalmount\"" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.870556 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc3daf2-84e7-4831-9eae-155632f1b0cd-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.877392 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fc3daf2-84e7-4831-9eae-155632f1b0cd-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.878059 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7r2m\" (UniqueName: \"kubernetes.io/projected/1fc3daf2-84e7-4831-9eae-155632f1b0cd-kube-api-access-q7r2m\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.907086 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-071cfedb-1347-4743-a581-3effab99c2f0\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-071cfedb-1347-4743-a581-3effab99c2f0\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:38 crc kubenswrapper[4793]: I0126 22:58:38.063033 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:38 crc kubenswrapper[4793]: I0126 22:58:38.661505 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 26 22:58:38 crc kubenswrapper[4793]: W0126 22:58:38.697979 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fc3daf2_84e7_4831_9eae_155632f1b0cd.slice/crio-136d2eada8dc8ad8ca0e43714623b1e36f1332a2625491cbc9da04375f4e14f0 WatchSource:0}: Error finding container 136d2eada8dc8ad8ca0e43714623b1e36f1332a2625491cbc9da04375f4e14f0: Status 404 returned error can't find the container with id 136d2eada8dc8ad8ca0e43714623b1e36f1332a2625491cbc9da04375f4e14f0 Jan 26 22:58:39 crc kubenswrapper[4793]: I0126 22:58:39.695535 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"1fc3daf2-84e7-4831-9eae-155632f1b0cd","Type":"ContainerStarted","Data":"136d2eada8dc8ad8ca0e43714623b1e36f1332a2625491cbc9da04375f4e14f0"} Jan 26 22:58:48 crc kubenswrapper[4793]: I0126 22:58:48.322804 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:58:48 crc kubenswrapper[4793]: I0126 22:58:48.323709 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" 
podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:58:48 crc kubenswrapper[4793]: I0126 22:58:48.791030 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/memcached-0" event={"ID":"d933c86b-bb42-4d94-9eb2-65888a5e95ab","Type":"ContainerStarted","Data":"2361625337e35c8eac0c41fe782b0be79cc75721f64f7d81c308cb9285165509"} Jan 26 22:58:48 crc kubenswrapper[4793]: I0126 22:58:48.791479 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/memcached-0" Jan 26 22:58:48 crc kubenswrapper[4793]: I0126 22:58:48.792614 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"1fc3daf2-84e7-4831-9eae-155632f1b0cd","Type":"ContainerStarted","Data":"fdda41f7414d46997cb7867b1c5ac174c160d966e32baf0ef8109fbb3b4ec313"} Jan 26 22:58:48 crc kubenswrapper[4793]: I0126 22:58:48.794022 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"9cf954a6-b71a-49f9-93c5-618d4e944159","Type":"ContainerStarted","Data":"53ed54adcffe723cc5c04ca3514e65945e26ee7b5398c857f9dbce255f565dfd"} Jan 26 22:58:48 crc kubenswrapper[4793]: I0126 22:58:48.811525 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/memcached-0" podStartSLOduration=1.9759169989999998 podStartE2EDuration="12.811502597s" podCreationTimestamp="2026-01-26 22:58:36 +0000 UTC" firstStartedPulling="2026-01-26 22:58:37.607350149 +0000 UTC m=+1132.596121661" lastFinishedPulling="2026-01-26 22:58:48.442935727 +0000 UTC m=+1143.431707259" observedRunningTime="2026-01-26 22:58:48.809130081 +0000 UTC m=+1143.797901603" watchObservedRunningTime="2026-01-26 22:58:48.811502597 +0000 UTC m=+1143.800274109" Jan 26 22:58:49 crc kubenswrapper[4793]: I0126 
22:58:49.805286 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"a8704cd7-a5e9-45ca-9886-cef2f797c7f1","Type":"ContainerStarted","Data":"c0e1c08938d407156158ed4ade62f90fafca0be9a3c8b26ccbdd350915d858cd"} Jan 26 22:58:50 crc kubenswrapper[4793]: I0126 22:58:50.815858 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"70c43064-b9c2-4f01-9bc2-5431c5dca494","Type":"ContainerStarted","Data":"dc3e39837d8e86561cf472a2f6f2745af62ce503d34f942a6ce713fb061ed1f7"} Jan 26 22:58:50 crc kubenswrapper[4793]: I0126 22:58:50.818335 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"733142d4-49c6-4e25-a160-78aa1118d296","Type":"ContainerStarted","Data":"df044600c026e05e6cd9488f72ff0a79fad01760d11999e101be3902e689d983"} Jan 26 22:58:50 crc kubenswrapper[4793]: I0126 22:58:50.820859 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a","Type":"ContainerStarted","Data":"ad58aa032b54033c90e8ecf954d6ad00a7b0dcbeedc6afe2b86ad4ad1a96e8ad"} Jan 26 22:58:54 crc kubenswrapper[4793]: I0126 22:58:54.848662 4793 generic.go:334] "Generic (PLEG): container finished" podID="1fc3daf2-84e7-4831-9eae-155632f1b0cd" containerID="fdda41f7414d46997cb7867b1c5ac174c160d966e32baf0ef8109fbb3b4ec313" exitCode=0 Jan 26 22:58:54 crc kubenswrapper[4793]: I0126 22:58:54.848838 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"1fc3daf2-84e7-4831-9eae-155632f1b0cd","Type":"ContainerDied","Data":"fdda41f7414d46997cb7867b1c5ac174c160d966e32baf0ef8109fbb3b4ec313"} Jan 26 22:58:54 crc kubenswrapper[4793]: I0126 22:58:54.851942 4793 generic.go:334] "Generic (PLEG): container finished" podID="9cf954a6-b71a-49f9-93c5-618d4e944159" 
containerID="53ed54adcffe723cc5c04ca3514e65945e26ee7b5398c857f9dbce255f565dfd" exitCode=0 Jan 26 22:58:54 crc kubenswrapper[4793]: I0126 22:58:54.851985 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"9cf954a6-b71a-49f9-93c5-618d4e944159","Type":"ContainerDied","Data":"53ed54adcffe723cc5c04ca3514e65945e26ee7b5398c857f9dbce255f565dfd"} Jan 26 22:58:55 crc kubenswrapper[4793]: I0126 22:58:55.863856 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"1fc3daf2-84e7-4831-9eae-155632f1b0cd","Type":"ContainerStarted","Data":"883c50ad0901303930f73ed00a7cb86060246fc7d8c8bd00c8c17ae567cf99ab"} Jan 26 22:58:55 crc kubenswrapper[4793]: I0126 22:58:55.866837 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"9cf954a6-b71a-49f9-93c5-618d4e944159","Type":"ContainerStarted","Data":"188bcef44c0b27f096f082d8b315d54f7b30cc57cf17a49ab9eabde5c537cb12"} Jan 26 22:58:55 crc kubenswrapper[4793]: I0126 22:58:55.907015 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstack-cell1-galera-0" podStartSLOduration=10.168277259 podStartE2EDuration="19.906990903s" podCreationTimestamp="2026-01-26 22:58:36 +0000 UTC" firstStartedPulling="2026-01-26 22:58:38.702065453 +0000 UTC m=+1133.690836965" lastFinishedPulling="2026-01-26 22:58:48.440779077 +0000 UTC m=+1143.429550609" observedRunningTime="2026-01-26 22:58:55.901277083 +0000 UTC m=+1150.890048615" watchObservedRunningTime="2026-01-26 22:58:55.906990903 +0000 UTC m=+1150.895762425" Jan 26 22:58:55 crc kubenswrapper[4793]: I0126 22:58:55.934848 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstack-galera-0" podStartSLOduration=10.458058645 podStartE2EDuration="21.934826201s" podCreationTimestamp="2026-01-26 22:58:34 +0000 UTC" 
firstStartedPulling="2026-01-26 22:58:37.010196801 +0000 UTC m=+1131.998968313" lastFinishedPulling="2026-01-26 22:58:48.486964347 +0000 UTC m=+1143.475735869" observedRunningTime="2026-01-26 22:58:55.923433432 +0000 UTC m=+1150.912204964" watchObservedRunningTime="2026-01-26 22:58:55.934826201 +0000 UTC m=+1150.923597713" Jan 26 22:58:56 crc kubenswrapper[4793]: I0126 22:58:56.382347 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:56 crc kubenswrapper[4793]: I0126 22:58:56.382384 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:57 crc kubenswrapper[4793]: I0126 22:58:57.110382 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/memcached-0" Jan 26 22:58:58 crc kubenswrapper[4793]: I0126 22:58:58.063591 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:58 crc kubenswrapper[4793]: I0126 22:58:58.063787 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:59:00 crc kubenswrapper[4793]: I0126 22:59:00.139538 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:59:00 crc kubenswrapper[4793]: I0126 22:59:00.238140 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:59:00 crc kubenswrapper[4793]: I0126 22:59:00.484511 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:59:00 crc kubenswrapper[4793]: I0126 22:59:00.567864 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:59:05 crc 
kubenswrapper[4793]: I0126 22:59:05.128695 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/root-account-create-update-pf48f"] Jan 26 22:59:05 crc kubenswrapper[4793]: I0126 22:59:05.132773 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-pf48f" Jan 26 22:59:05 crc kubenswrapper[4793]: I0126 22:59:05.136789 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-mariadb-root-db-secret" Jan 26 22:59:05 crc kubenswrapper[4793]: I0126 22:59:05.144770 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-pf48f"] Jan 26 22:59:05 crc kubenswrapper[4793]: I0126 22:59:05.199189 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55b57ce0-a45c-41a2-a54b-68cd4bb0996f-operator-scripts\") pod \"root-account-create-update-pf48f\" (UID: \"55b57ce0-a45c-41a2-a54b-68cd4bb0996f\") " pod="nova-kuttl-default/root-account-create-update-pf48f" Jan 26 22:59:05 crc kubenswrapper[4793]: I0126 22:59:05.199255 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cldt\" (UniqueName: \"kubernetes.io/projected/55b57ce0-a45c-41a2-a54b-68cd4bb0996f-kube-api-access-6cldt\") pod \"root-account-create-update-pf48f\" (UID: \"55b57ce0-a45c-41a2-a54b-68cd4bb0996f\") " pod="nova-kuttl-default/root-account-create-update-pf48f" Jan 26 22:59:05 crc kubenswrapper[4793]: I0126 22:59:05.300892 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55b57ce0-a45c-41a2-a54b-68cd4bb0996f-operator-scripts\") pod \"root-account-create-update-pf48f\" (UID: \"55b57ce0-a45c-41a2-a54b-68cd4bb0996f\") " pod="nova-kuttl-default/root-account-create-update-pf48f" Jan 26 
22:59:05 crc kubenswrapper[4793]: I0126 22:59:05.301320 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cldt\" (UniqueName: \"kubernetes.io/projected/55b57ce0-a45c-41a2-a54b-68cd4bb0996f-kube-api-access-6cldt\") pod \"root-account-create-update-pf48f\" (UID: \"55b57ce0-a45c-41a2-a54b-68cd4bb0996f\") " pod="nova-kuttl-default/root-account-create-update-pf48f" Jan 26 22:59:05 crc kubenswrapper[4793]: I0126 22:59:05.302413 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55b57ce0-a45c-41a2-a54b-68cd4bb0996f-operator-scripts\") pod \"root-account-create-update-pf48f\" (UID: \"55b57ce0-a45c-41a2-a54b-68cd4bb0996f\") " pod="nova-kuttl-default/root-account-create-update-pf48f" Jan 26 22:59:05 crc kubenswrapper[4793]: I0126 22:59:05.335649 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cldt\" (UniqueName: \"kubernetes.io/projected/55b57ce0-a45c-41a2-a54b-68cd4bb0996f-kube-api-access-6cldt\") pod \"root-account-create-update-pf48f\" (UID: \"55b57ce0-a45c-41a2-a54b-68cd4bb0996f\") " pod="nova-kuttl-default/root-account-create-update-pf48f" Jan 26 22:59:05 crc kubenswrapper[4793]: I0126 22:59:05.458985 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-pf48f" Jan 26 22:59:05 crc kubenswrapper[4793]: I0126 22:59:05.949382 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-pf48f"] Jan 26 22:59:05 crc kubenswrapper[4793]: W0126 22:59:05.952565 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55b57ce0_a45c_41a2_a54b_68cd4bb0996f.slice/crio-b685a3801a5169ac544af63d8e9900763cc63380a70a4da72c5d4e006103171b WatchSource:0}: Error finding container b685a3801a5169ac544af63d8e9900763cc63380a70a4da72c5d4e006103171b: Status 404 returned error can't find the container with id b685a3801a5169ac544af63d8e9900763cc63380a70a4da72c5d4e006103171b Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.477979 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-db-create-566zk"] Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.479469 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-db-create-566zk" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.513071 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-create-566zk"] Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.521667 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkbrg\" (UniqueName: \"kubernetes.io/projected/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a-kube-api-access-vkbrg\") pod \"keystone-db-create-566zk\" (UID: \"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a\") " pod="nova-kuttl-default/keystone-db-create-566zk" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.521824 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a-operator-scripts\") pod \"keystone-db-create-566zk\" (UID: \"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a\") " pod="nova-kuttl-default/keystone-db-create-566zk" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.603887 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-517e-account-create-update-z5vlt"] Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.605710 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.608782 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-db-secret" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.612638 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-517e-account-create-update-z5vlt"] Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.625117 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a62f2482-4c97-42ed-92d7-1b0dc5319971-operator-scripts\") pod \"keystone-517e-account-create-update-z5vlt\" (UID: \"a62f2482-4c97-42ed-92d7-1b0dc5319971\") " pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.628410 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kfrr\" (UniqueName: \"kubernetes.io/projected/a62f2482-4c97-42ed-92d7-1b0dc5319971-kube-api-access-5kfrr\") pod \"keystone-517e-account-create-update-z5vlt\" (UID: \"a62f2482-4c97-42ed-92d7-1b0dc5319971\") " pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.628522 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkbrg\" (UniqueName: \"kubernetes.io/projected/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a-kube-api-access-vkbrg\") pod \"keystone-db-create-566zk\" (UID: \"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a\") " pod="nova-kuttl-default/keystone-db-create-566zk" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.628700 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a-operator-scripts\") pod \"keystone-db-create-566zk\" (UID: \"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a\") " pod="nova-kuttl-default/keystone-db-create-566zk" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.630128 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a-operator-scripts\") pod \"keystone-db-create-566zk\" (UID: \"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a\") " pod="nova-kuttl-default/keystone-db-create-566zk" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.668583 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkbrg\" (UniqueName: \"kubernetes.io/projected/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a-kube-api-access-vkbrg\") pod \"keystone-db-create-566zk\" (UID: \"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a\") " pod="nova-kuttl-default/keystone-db-create-566zk" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.731374 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kfrr\" (UniqueName: \"kubernetes.io/projected/a62f2482-4c97-42ed-92d7-1b0dc5319971-kube-api-access-5kfrr\") pod \"keystone-517e-account-create-update-z5vlt\" (UID: \"a62f2482-4c97-42ed-92d7-1b0dc5319971\") " pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.732323 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a62f2482-4c97-42ed-92d7-1b0dc5319971-operator-scripts\") pod \"keystone-517e-account-create-update-z5vlt\" (UID: \"a62f2482-4c97-42ed-92d7-1b0dc5319971\") " pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.733237 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a62f2482-4c97-42ed-92d7-1b0dc5319971-operator-scripts\") pod \"keystone-517e-account-create-update-z5vlt\" (UID: \"a62f2482-4c97-42ed-92d7-1b0dc5319971\") " pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.748129 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kfrr\" (UniqueName: \"kubernetes.io/projected/a62f2482-4c97-42ed-92d7-1b0dc5319971-kube-api-access-5kfrr\") pod \"keystone-517e-account-create-update-z5vlt\" (UID: \"a62f2482-4c97-42ed-92d7-1b0dc5319971\") " pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.807120 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-566zk" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.892705 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-db-create-bz7cb"] Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.894302 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-bz7cb" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.902362 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-create-bz7cb"] Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.930301 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.937316 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/466a301d-4d91-4f49-831a-7d7f07ecd1bd-operator-scripts\") pod \"placement-db-create-bz7cb\" (UID: \"466a301d-4d91-4f49-831a-7d7f07ecd1bd\") " pod="nova-kuttl-default/placement-db-create-bz7cb" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.937396 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmfg8\" (UniqueName: \"kubernetes.io/projected/466a301d-4d91-4f49-831a-7d7f07ecd1bd-kube-api-access-pmfg8\") pod \"placement-db-create-bz7cb\" (UID: \"466a301d-4d91-4f49-831a-7d7f07ecd1bd\") " pod="nova-kuttl-default/placement-db-create-bz7cb" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.954577 4793 generic.go:334] "Generic (PLEG): container finished" podID="55b57ce0-a45c-41a2-a54b-68cd4bb0996f" containerID="89a414f6ad4eff334e7c77f7dc32f2bcaf583553d9176e51a8eb3503f48b90e9" exitCode=0 Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.954623 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-pf48f" event={"ID":"55b57ce0-a45c-41a2-a54b-68cd4bb0996f","Type":"ContainerDied","Data":"89a414f6ad4eff334e7c77f7dc32f2bcaf583553d9176e51a8eb3503f48b90e9"} Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.954656 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-pf48f" event={"ID":"55b57ce0-a45c-41a2-a54b-68cd4bb0996f","Type":"ContainerStarted","Data":"b685a3801a5169ac544af63d8e9900763cc63380a70a4da72c5d4e006103171b"} Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.015455 4793 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["nova-kuttl-default/placement-ca47-account-create-update-8c675"] Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.017271 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.021534 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-db-secret" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.026698 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-ca47-account-create-update-8c675"] Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.045625 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmfg8\" (UniqueName: \"kubernetes.io/projected/466a301d-4d91-4f49-831a-7d7f07ecd1bd-kube-api-access-pmfg8\") pod \"placement-db-create-bz7cb\" (UID: \"466a301d-4d91-4f49-831a-7d7f07ecd1bd\") " pod="nova-kuttl-default/placement-db-create-bz7cb" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.045744 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjt59\" (UniqueName: \"kubernetes.io/projected/05214d2f-078c-43c5-bf4d-c8d80580ce8f-kube-api-access-bjt59\") pod \"placement-ca47-account-create-update-8c675\" (UID: \"05214d2f-078c-43c5-bf4d-c8d80580ce8f\") " pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.045783 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/466a301d-4d91-4f49-831a-7d7f07ecd1bd-operator-scripts\") pod \"placement-db-create-bz7cb\" (UID: \"466a301d-4d91-4f49-831a-7d7f07ecd1bd\") " pod="nova-kuttl-default/placement-db-create-bz7cb" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.045809 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05214d2f-078c-43c5-bf4d-c8d80580ce8f-operator-scripts\") pod \"placement-ca47-account-create-update-8c675\" (UID: \"05214d2f-078c-43c5-bf4d-c8d80580ce8f\") " pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.046732 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/466a301d-4d91-4f49-831a-7d7f07ecd1bd-operator-scripts\") pod \"placement-db-create-bz7cb\" (UID: \"466a301d-4d91-4f49-831a-7d7f07ecd1bd\") " pod="nova-kuttl-default/placement-db-create-bz7cb" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.063591 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmfg8\" (UniqueName: \"kubernetes.io/projected/466a301d-4d91-4f49-831a-7d7f07ecd1bd-kube-api-access-pmfg8\") pod \"placement-db-create-bz7cb\" (UID: \"466a301d-4d91-4f49-831a-7d7f07ecd1bd\") " pod="nova-kuttl-default/placement-db-create-bz7cb" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.147165 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjt59\" (UniqueName: \"kubernetes.io/projected/05214d2f-078c-43c5-bf4d-c8d80580ce8f-kube-api-access-bjt59\") pod \"placement-ca47-account-create-update-8c675\" (UID: \"05214d2f-078c-43c5-bf4d-c8d80580ce8f\") " pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.149554 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05214d2f-078c-43c5-bf4d-c8d80580ce8f-operator-scripts\") pod \"placement-ca47-account-create-update-8c675\" (UID: \"05214d2f-078c-43c5-bf4d-c8d80580ce8f\") " 
pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.148199 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05214d2f-078c-43c5-bf4d-c8d80580ce8f-operator-scripts\") pod \"placement-ca47-account-create-update-8c675\" (UID: \"05214d2f-078c-43c5-bf4d-c8d80580ce8f\") " pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.164035 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjt59\" (UniqueName: \"kubernetes.io/projected/05214d2f-078c-43c5-bf4d-c8d80580ce8f-kube-api-access-bjt59\") pod \"placement-ca47-account-create-update-8c675\" (UID: \"05214d2f-078c-43c5-bf4d-c8d80580ce8f\") " pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.231857 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-bz7cb" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.320591 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-create-566zk"] Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.346756 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.416148 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-517e-account-create-update-z5vlt"] Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.500414 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-create-bz7cb"] Jan 26 22:59:07 crc kubenswrapper[4793]: W0126 22:59:07.503500 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod466a301d_4d91_4f49_831a_7d7f07ecd1bd.slice/crio-60162eaba3c7244073b5a952a7623850b6d7701f3c341cdecbbc700809199075 WatchSource:0}: Error finding container 60162eaba3c7244073b5a952a7623850b6d7701f3c341cdecbbc700809199075: Status 404 returned error can't find the container with id 60162eaba3c7244073b5a952a7623850b6d7701f3c341cdecbbc700809199075 Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.786791 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-ca47-account-create-update-8c675"] Jan 26 22:59:07 crc kubenswrapper[4793]: W0126 22:59:07.787041 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05214d2f_078c_43c5_bf4d_c8d80580ce8f.slice/crio-c1c79a0169582dedddc18b9f72c282cb8add01ed17e2663a9a0d5ebbb8a8a2eb WatchSource:0}: Error finding container c1c79a0169582dedddc18b9f72c282cb8add01ed17e2663a9a0d5ebbb8a8a2eb: Status 404 returned error can't find the container with id c1c79a0169582dedddc18b9f72c282cb8add01ed17e2663a9a0d5ebbb8a8a2eb Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.963326 4793 generic.go:334] "Generic (PLEG): container finished" podID="466a301d-4d91-4f49-831a-7d7f07ecd1bd" containerID="4e539782db86a1f33f1e8d758da83db10df681ef54a932f02b39da2a8adb6252" exitCode=0 Jan 26 22:59:07 crc 
kubenswrapper[4793]: I0126 22:59:07.963402 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-bz7cb" event={"ID":"466a301d-4d91-4f49-831a-7d7f07ecd1bd","Type":"ContainerDied","Data":"4e539782db86a1f33f1e8d758da83db10df681ef54a932f02b39da2a8adb6252"} Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.963451 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-bz7cb" event={"ID":"466a301d-4d91-4f49-831a-7d7f07ecd1bd","Type":"ContainerStarted","Data":"60162eaba3c7244073b5a952a7623850b6d7701f3c341cdecbbc700809199075"} Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.965080 4793 generic.go:334] "Generic (PLEG): container finished" podID="a62f2482-4c97-42ed-92d7-1b0dc5319971" containerID="df5d497c42d8a41e51dcee4a447c2fbe1e944067666c916ccbedfee89f899ce4" exitCode=0 Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.965156 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" event={"ID":"a62f2482-4c97-42ed-92d7-1b0dc5319971","Type":"ContainerDied","Data":"df5d497c42d8a41e51dcee4a447c2fbe1e944067666c916ccbedfee89f899ce4"} Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.965226 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" event={"ID":"a62f2482-4c97-42ed-92d7-1b0dc5319971","Type":"ContainerStarted","Data":"1c043068139cd4d00e5b5fe1b6d930a2b0283b12081e9e25bfb54c733f1d3512"} Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.966800 4793 generic.go:334] "Generic (PLEG): container finished" podID="74d39f02-fd88-4b8f-8f71-0f4f1386ad9a" containerID="2b2b55f201bdc2b2f238cbda3668509af74a34b70e97e47426e1788b79f37c2e" exitCode=0 Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.966900 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-566zk" 
event={"ID":"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a","Type":"ContainerDied","Data":"2b2b55f201bdc2b2f238cbda3668509af74a34b70e97e47426e1788b79f37c2e"} Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.966931 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-566zk" event={"ID":"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a","Type":"ContainerStarted","Data":"1d500a4e24123c868163e68fc793bc5704da2cb708739917273b7d7a3b805aa6"} Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.968065 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" event={"ID":"05214d2f-078c-43c5-bf4d-c8d80580ce8f","Type":"ContainerStarted","Data":"c1c79a0169582dedddc18b9f72c282cb8add01ed17e2663a9a0d5ebbb8a8a2eb"} Jan 26 22:59:08 crc kubenswrapper[4793]: I0126 22:59:08.263636 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-pf48f" Jan 26 22:59:08 crc kubenswrapper[4793]: I0126 22:59:08.377511 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55b57ce0-a45c-41a2-a54b-68cd4bb0996f-operator-scripts\") pod \"55b57ce0-a45c-41a2-a54b-68cd4bb0996f\" (UID: \"55b57ce0-a45c-41a2-a54b-68cd4bb0996f\") " Jan 26 22:59:08 crc kubenswrapper[4793]: I0126 22:59:08.377558 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cldt\" (UniqueName: \"kubernetes.io/projected/55b57ce0-a45c-41a2-a54b-68cd4bb0996f-kube-api-access-6cldt\") pod \"55b57ce0-a45c-41a2-a54b-68cd4bb0996f\" (UID: \"55b57ce0-a45c-41a2-a54b-68cd4bb0996f\") " Jan 26 22:59:08 crc kubenswrapper[4793]: I0126 22:59:08.378686 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55b57ce0-a45c-41a2-a54b-68cd4bb0996f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") 
pod "55b57ce0-a45c-41a2-a54b-68cd4bb0996f" (UID: "55b57ce0-a45c-41a2-a54b-68cd4bb0996f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:59:08 crc kubenswrapper[4793]: I0126 22:59:08.384360 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55b57ce0-a45c-41a2-a54b-68cd4bb0996f-kube-api-access-6cldt" (OuterVolumeSpecName: "kube-api-access-6cldt") pod "55b57ce0-a45c-41a2-a54b-68cd4bb0996f" (UID: "55b57ce0-a45c-41a2-a54b-68cd4bb0996f"). InnerVolumeSpecName "kube-api-access-6cldt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:59:08 crc kubenswrapper[4793]: I0126 22:59:08.479427 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55b57ce0-a45c-41a2-a54b-68cd4bb0996f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:08 crc kubenswrapper[4793]: I0126 22:59:08.479482 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cldt\" (UniqueName: \"kubernetes.io/projected/55b57ce0-a45c-41a2-a54b-68cd4bb0996f-kube-api-access-6cldt\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:08 crc kubenswrapper[4793]: I0126 22:59:08.986184 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-pf48f" Jan 26 22:59:08 crc kubenswrapper[4793]: I0126 22:59:08.986182 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-pf48f" event={"ID":"55b57ce0-a45c-41a2-a54b-68cd4bb0996f","Type":"ContainerDied","Data":"b685a3801a5169ac544af63d8e9900763cc63380a70a4da72c5d4e006103171b"} Jan 26 22:59:08 crc kubenswrapper[4793]: I0126 22:59:08.986309 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b685a3801a5169ac544af63d8e9900763cc63380a70a4da72c5d4e006103171b" Jan 26 22:59:08 crc kubenswrapper[4793]: I0126 22:59:08.989386 4793 generic.go:334] "Generic (PLEG): container finished" podID="05214d2f-078c-43c5-bf4d-c8d80580ce8f" containerID="5222c87c78a1d9a9de00844bf7dc07187aeb8cf7d19078f05c743d77635dc81a" exitCode=0 Jan 26 22:59:08 crc kubenswrapper[4793]: I0126 22:59:08.989507 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" event={"ID":"05214d2f-078c-43c5-bf4d-c8d80580ce8f","Type":"ContainerDied","Data":"5222c87c78a1d9a9de00844bf7dc07187aeb8cf7d19078f05c743d77635dc81a"} Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.350637 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-db-create-566zk" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.395314 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkbrg\" (UniqueName: \"kubernetes.io/projected/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a-kube-api-access-vkbrg\") pod \"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a\" (UID: \"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a\") " Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.395418 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a-operator-scripts\") pod \"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a\" (UID: \"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a\") " Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.396402 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "74d39f02-fd88-4b8f-8f71-0f4f1386ad9a" (UID: "74d39f02-fd88-4b8f-8f71-0f4f1386ad9a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.396997 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.402591 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a-kube-api-access-vkbrg" (OuterVolumeSpecName: "kube-api-access-vkbrg") pod "74d39f02-fd88-4b8f-8f71-0f4f1386ad9a" (UID: "74d39f02-fd88-4b8f-8f71-0f4f1386ad9a"). InnerVolumeSpecName "kube-api-access-vkbrg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.438027 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-bz7cb" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.444019 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.498006 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a62f2482-4c97-42ed-92d7-1b0dc5319971-operator-scripts\") pod \"a62f2482-4c97-42ed-92d7-1b0dc5319971\" (UID: \"a62f2482-4c97-42ed-92d7-1b0dc5319971\") " Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.498077 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kfrr\" (UniqueName: \"kubernetes.io/projected/a62f2482-4c97-42ed-92d7-1b0dc5319971-kube-api-access-5kfrr\") pod \"a62f2482-4c97-42ed-92d7-1b0dc5319971\" (UID: \"a62f2482-4c97-42ed-92d7-1b0dc5319971\") " Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.498100 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/466a301d-4d91-4f49-831a-7d7f07ecd1bd-operator-scripts\") pod \"466a301d-4d91-4f49-831a-7d7f07ecd1bd\" (UID: \"466a301d-4d91-4f49-831a-7d7f07ecd1bd\") " Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.498159 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmfg8\" (UniqueName: \"kubernetes.io/projected/466a301d-4d91-4f49-831a-7d7f07ecd1bd-kube-api-access-pmfg8\") pod \"466a301d-4d91-4f49-831a-7d7f07ecd1bd\" (UID: \"466a301d-4d91-4f49-831a-7d7f07ecd1bd\") " Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.498403 4793 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkbrg\" (UniqueName: \"kubernetes.io/projected/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a-kube-api-access-vkbrg\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.498630 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a62f2482-4c97-42ed-92d7-1b0dc5319971-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a62f2482-4c97-42ed-92d7-1b0dc5319971" (UID: "a62f2482-4c97-42ed-92d7-1b0dc5319971"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.499422 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/466a301d-4d91-4f49-831a-7d7f07ecd1bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "466a301d-4d91-4f49-831a-7d7f07ecd1bd" (UID: "466a301d-4d91-4f49-831a-7d7f07ecd1bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.501553 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/466a301d-4d91-4f49-831a-7d7f07ecd1bd-kube-api-access-pmfg8" (OuterVolumeSpecName: "kube-api-access-pmfg8") pod "466a301d-4d91-4f49-831a-7d7f07ecd1bd" (UID: "466a301d-4d91-4f49-831a-7d7f07ecd1bd"). InnerVolumeSpecName "kube-api-access-pmfg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.501642 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a62f2482-4c97-42ed-92d7-1b0dc5319971-kube-api-access-5kfrr" (OuterVolumeSpecName: "kube-api-access-5kfrr") pod "a62f2482-4c97-42ed-92d7-1b0dc5319971" (UID: "a62f2482-4c97-42ed-92d7-1b0dc5319971"). InnerVolumeSpecName "kube-api-access-5kfrr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.600068 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kfrr\" (UniqueName: \"kubernetes.io/projected/a62f2482-4c97-42ed-92d7-1b0dc5319971-kube-api-access-5kfrr\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.600446 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/466a301d-4d91-4f49-831a-7d7f07ecd1bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.600467 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmfg8\" (UniqueName: \"kubernetes.io/projected/466a301d-4d91-4f49-831a-7d7f07ecd1bd-kube-api-access-pmfg8\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.600487 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a62f2482-4c97-42ed-92d7-1b0dc5319971-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.999304 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-566zk" event={"ID":"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a","Type":"ContainerDied","Data":"1d500a4e24123c868163e68fc793bc5704da2cb708739917273b7d7a3b805aa6"} Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.999347 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d500a4e24123c868163e68fc793bc5704da2cb708739917273b7d7a3b805aa6" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.999348 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-db-create-566zk" Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.001208 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-bz7cb" event={"ID":"466a301d-4d91-4f49-831a-7d7f07ecd1bd","Type":"ContainerDied","Data":"60162eaba3c7244073b5a952a7623850b6d7701f3c341cdecbbc700809199075"} Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.001242 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60162eaba3c7244073b5a952a7623850b6d7701f3c341cdecbbc700809199075" Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.001256 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-bz7cb" Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.002832 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" event={"ID":"a62f2482-4c97-42ed-92d7-1b0dc5319971","Type":"ContainerDied","Data":"1c043068139cd4d00e5b5fe1b6d930a2b0283b12081e9e25bfb54c733f1d3512"} Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.002864 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.002867 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c043068139cd4d00e5b5fe1b6d930a2b0283b12081e9e25bfb54c733f1d3512" Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.365789 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.436035 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjt59\" (UniqueName: \"kubernetes.io/projected/05214d2f-078c-43c5-bf4d-c8d80580ce8f-kube-api-access-bjt59\") pod \"05214d2f-078c-43c5-bf4d-c8d80580ce8f\" (UID: \"05214d2f-078c-43c5-bf4d-c8d80580ce8f\") " Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.436131 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05214d2f-078c-43c5-bf4d-c8d80580ce8f-operator-scripts\") pod \"05214d2f-078c-43c5-bf4d-c8d80580ce8f\" (UID: \"05214d2f-078c-43c5-bf4d-c8d80580ce8f\") " Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.436806 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05214d2f-078c-43c5-bf4d-c8d80580ce8f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "05214d2f-078c-43c5-bf4d-c8d80580ce8f" (UID: "05214d2f-078c-43c5-bf4d-c8d80580ce8f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.440328 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05214d2f-078c-43c5-bf4d-c8d80580ce8f-kube-api-access-bjt59" (OuterVolumeSpecName: "kube-api-access-bjt59") pod "05214d2f-078c-43c5-bf4d-c8d80580ce8f" (UID: "05214d2f-078c-43c5-bf4d-c8d80580ce8f"). InnerVolumeSpecName "kube-api-access-bjt59". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.538270 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjt59\" (UniqueName: \"kubernetes.io/projected/05214d2f-078c-43c5-bf4d-c8d80580ce8f-kube-api-access-bjt59\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.538311 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05214d2f-078c-43c5-bf4d-c8d80580ce8f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:11 crc kubenswrapper[4793]: I0126 22:59:11.014720 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" event={"ID":"05214d2f-078c-43c5-bf4d-c8d80580ce8f","Type":"ContainerDied","Data":"c1c79a0169582dedddc18b9f72c282cb8add01ed17e2663a9a0d5ebbb8a8a2eb"} Jan 26 22:59:11 crc kubenswrapper[4793]: I0126 22:59:11.014799 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1c79a0169582dedddc18b9f72c282cb8add01ed17e2663a9a0d5ebbb8a8a2eb" Jan 26 22:59:11 crc kubenswrapper[4793]: I0126 22:59:11.014958 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" Jan 26 22:59:11 crc kubenswrapper[4793]: I0126 22:59:11.816575 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/root-account-create-update-pf48f"] Jan 26 22:59:11 crc kubenswrapper[4793]: I0126 22:59:11.826255 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/root-account-create-update-pf48f"] Jan 26 22:59:13 crc kubenswrapper[4793]: I0126 22:59:13.776284 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55b57ce0-a45c-41a2-a54b-68cd4bb0996f" path="/var/lib/kubelet/pods/55b57ce0-a45c-41a2-a54b-68cd4bb0996f/volumes" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.804312 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/root-account-create-update-hrh5x"] Jan 26 22:59:16 crc kubenswrapper[4793]: E0126 22:59:16.805299 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f2482-4c97-42ed-92d7-1b0dc5319971" containerName="mariadb-account-create-update" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.805316 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f2482-4c97-42ed-92d7-1b0dc5319971" containerName="mariadb-account-create-update" Jan 26 22:59:16 crc kubenswrapper[4793]: E0126 22:59:16.805344 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="466a301d-4d91-4f49-831a-7d7f07ecd1bd" containerName="mariadb-database-create" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.805354 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="466a301d-4d91-4f49-831a-7d7f07ecd1bd" containerName="mariadb-database-create" Jan 26 22:59:16 crc kubenswrapper[4793]: E0126 22:59:16.805375 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05214d2f-078c-43c5-bf4d-c8d80580ce8f" containerName="mariadb-account-create-update" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.805384 4793 
state_mem.go:107] "Deleted CPUSet assignment" podUID="05214d2f-078c-43c5-bf4d-c8d80580ce8f" containerName="mariadb-account-create-update" Jan 26 22:59:16 crc kubenswrapper[4793]: E0126 22:59:16.805398 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d39f02-fd88-4b8f-8f71-0f4f1386ad9a" containerName="mariadb-database-create" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.805406 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d39f02-fd88-4b8f-8f71-0f4f1386ad9a" containerName="mariadb-database-create" Jan 26 22:59:16 crc kubenswrapper[4793]: E0126 22:59:16.805415 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b57ce0-a45c-41a2-a54b-68cd4bb0996f" containerName="mariadb-account-create-update" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.805423 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b57ce0-a45c-41a2-a54b-68cd4bb0996f" containerName="mariadb-account-create-update" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.805629 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f2482-4c97-42ed-92d7-1b0dc5319971" containerName="mariadb-account-create-update" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.805641 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="55b57ce0-a45c-41a2-a54b-68cd4bb0996f" containerName="mariadb-account-create-update" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.805660 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="466a301d-4d91-4f49-831a-7d7f07ecd1bd" containerName="mariadb-database-create" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.805675 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="05214d2f-078c-43c5-bf4d-c8d80580ce8f" containerName="mariadb-account-create-update" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.805691 4793 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="74d39f02-fd88-4b8f-8f71-0f4f1386ad9a" containerName="mariadb-database-create" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.806329 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-hrh5x" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.808649 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-cell1-mariadb-root-db-secret" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.812808 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-hrh5x"] Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.865599 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6qwl\" (UniqueName: \"kubernetes.io/projected/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a-kube-api-access-h6qwl\") pod \"root-account-create-update-hrh5x\" (UID: \"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a\") " pod="nova-kuttl-default/root-account-create-update-hrh5x" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.865723 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a-operator-scripts\") pod \"root-account-create-update-hrh5x\" (UID: \"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a\") " pod="nova-kuttl-default/root-account-create-update-hrh5x" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.967706 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6qwl\" (UniqueName: \"kubernetes.io/projected/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a-kube-api-access-h6qwl\") pod \"root-account-create-update-hrh5x\" (UID: \"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a\") " pod="nova-kuttl-default/root-account-create-update-hrh5x" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 
22:59:16.967816 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a-operator-scripts\") pod \"root-account-create-update-hrh5x\" (UID: \"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a\") " pod="nova-kuttl-default/root-account-create-update-hrh5x" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.969957 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a-operator-scripts\") pod \"root-account-create-update-hrh5x\" (UID: \"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a\") " pod="nova-kuttl-default/root-account-create-update-hrh5x" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.989334 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6qwl\" (UniqueName: \"kubernetes.io/projected/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a-kube-api-access-h6qwl\") pod \"root-account-create-update-hrh5x\" (UID: \"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a\") " pod="nova-kuttl-default/root-account-create-update-hrh5x" Jan 26 22:59:17 crc kubenswrapper[4793]: I0126 22:59:17.134696 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-hrh5x" Jan 26 22:59:17 crc kubenswrapper[4793]: I0126 22:59:17.393748 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-hrh5x"] Jan 26 22:59:18 crc kubenswrapper[4793]: I0126 22:59:18.086568 4793 generic.go:334] "Generic (PLEG): container finished" podID="bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a" containerID="cbbda6e47228c7c1d12d5254e7a2e2c7a763c76bfeae45a791419e764951d10e" exitCode=0 Jan 26 22:59:18 crc kubenswrapper[4793]: I0126 22:59:18.086656 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-hrh5x" event={"ID":"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a","Type":"ContainerDied","Data":"cbbda6e47228c7c1d12d5254e7a2e2c7a763c76bfeae45a791419e764951d10e"} Jan 26 22:59:18 crc kubenswrapper[4793]: I0126 22:59:18.087003 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-hrh5x" event={"ID":"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a","Type":"ContainerStarted","Data":"73979830749be87b355aab2585ef0b9dd7cc4eb559480d528575bcafe7ac1ba2"} Jan 26 22:59:18 crc kubenswrapper[4793]: I0126 22:59:18.322079 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:59:18 crc kubenswrapper[4793]: I0126 22:59:18.322166 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:59:18 crc kubenswrapper[4793]: I0126 22:59:18.322291 4793 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:59:18 crc kubenswrapper[4793]: I0126 22:59:18.323135 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f0bcae8737d5ff963297bee9670c968431e1a6c097e0d397cb380eea5515587"} pod="openshift-machine-config-operator/machine-config-daemon-5htjl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 22:59:18 crc kubenswrapper[4793]: I0126 22:59:18.323304 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" containerID="cri-o://0f0bcae8737d5ff963297bee9670c968431e1a6c097e0d397cb380eea5515587" gracePeriod=600 Jan 26 22:59:19 crc kubenswrapper[4793]: I0126 22:59:19.094837 4793 generic.go:334] "Generic (PLEG): container finished" podID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerID="0f0bcae8737d5ff963297bee9670c968431e1a6c097e0d397cb380eea5515587" exitCode=0 Jan 26 22:59:19 crc kubenswrapper[4793]: I0126 22:59:19.094908 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerDied","Data":"0f0bcae8737d5ff963297bee9670c968431e1a6c097e0d397cb380eea5515587"} Jan 26 22:59:19 crc kubenswrapper[4793]: I0126 22:59:19.096327 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerStarted","Data":"c6d612632a90ff0cb1b4809ea801026e4879766ff5fb99a4ea1a8127b7cf2cc3"} Jan 26 22:59:19 crc kubenswrapper[4793]: I0126 22:59:19.096384 4793 scope.go:117] "RemoveContainer" 
containerID="2270771b37172879cefcac364640536fda7d04596bd8eec13cf97ac1bcd6539c" Jan 26 22:59:19 crc kubenswrapper[4793]: I0126 22:59:19.402678 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-hrh5x" Jan 26 22:59:19 crc kubenswrapper[4793]: I0126 22:59:19.514365 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6qwl\" (UniqueName: \"kubernetes.io/projected/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a-kube-api-access-h6qwl\") pod \"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a\" (UID: \"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a\") " Jan 26 22:59:19 crc kubenswrapper[4793]: I0126 22:59:19.514469 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a-operator-scripts\") pod \"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a\" (UID: \"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a\") " Jan 26 22:59:19 crc kubenswrapper[4793]: I0126 22:59:19.515360 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a" (UID: "bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:59:19 crc kubenswrapper[4793]: I0126 22:59:19.520083 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a-kube-api-access-h6qwl" (OuterVolumeSpecName: "kube-api-access-h6qwl") pod "bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a" (UID: "bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a"). InnerVolumeSpecName "kube-api-access-h6qwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:59:19 crc kubenswrapper[4793]: I0126 22:59:19.616665 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:19 crc kubenswrapper[4793]: I0126 22:59:19.616714 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6qwl\" (UniqueName: \"kubernetes.io/projected/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a-kube-api-access-h6qwl\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:20 crc kubenswrapper[4793]: I0126 22:59:20.106467 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-hrh5x" event={"ID":"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a","Type":"ContainerDied","Data":"73979830749be87b355aab2585ef0b9dd7cc4eb559480d528575bcafe7ac1ba2"} Jan 26 22:59:20 crc kubenswrapper[4793]: I0126 22:59:20.106515 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73979830749be87b355aab2585ef0b9dd7cc4eb559480d528575bcafe7ac1ba2" Jan 26 22:59:20 crc kubenswrapper[4793]: I0126 22:59:20.106536 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-hrh5x" Jan 26 22:59:23 crc kubenswrapper[4793]: I0126 22:59:23.152176 4793 generic.go:334] "Generic (PLEG): container finished" podID="733142d4-49c6-4e25-a160-78aa1118d296" containerID="df044600c026e05e6cd9488f72ff0a79fad01760d11999e101be3902e689d983" exitCode=0 Jan 26 22:59:23 crc kubenswrapper[4793]: I0126 22:59:23.152344 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"733142d4-49c6-4e25-a160-78aa1118d296","Type":"ContainerDied","Data":"df044600c026e05e6cd9488f72ff0a79fad01760d11999e101be3902e689d983"} Jan 26 22:59:23 crc kubenswrapper[4793]: I0126 22:59:23.157185 4793 generic.go:334] "Generic (PLEG): container finished" podID="4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a" containerID="ad58aa032b54033c90e8ecf954d6ad00a7b0dcbeedc6afe2b86ad4ad1a96e8ad" exitCode=0 Jan 26 22:59:23 crc kubenswrapper[4793]: I0126 22:59:23.157236 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a","Type":"ContainerDied","Data":"ad58aa032b54033c90e8ecf954d6ad00a7b0dcbeedc6afe2b86ad4ad1a96e8ad"} Jan 26 22:59:23 crc kubenswrapper[4793]: I0126 22:59:23.161587 4793 generic.go:334] "Generic (PLEG): container finished" podID="70c43064-b9c2-4f01-9bc2-5431c5dca494" containerID="dc3e39837d8e86561cf472a2f6f2745af62ce503d34f942a6ce713fb061ed1f7" exitCode=0 Jan 26 22:59:23 crc kubenswrapper[4793]: I0126 22:59:23.161685 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"70c43064-b9c2-4f01-9bc2-5431c5dca494","Type":"ContainerDied","Data":"dc3e39837d8e86561cf472a2f6f2745af62ce503d34f942a6ce713fb061ed1f7"} Jan 26 22:59:23 crc kubenswrapper[4793]: I0126 22:59:23.169997 4793 generic.go:334] "Generic (PLEG): container finished" podID="a8704cd7-a5e9-45ca-9886-cef2f797c7f1" 
containerID="c0e1c08938d407156158ed4ade62f90fafca0be9a3c8b26ccbdd350915d858cd" exitCode=0 Jan 26 22:59:23 crc kubenswrapper[4793]: I0126 22:59:23.170059 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"a8704cd7-a5e9-45ca-9886-cef2f797c7f1","Type":"ContainerDied","Data":"c0e1c08938d407156158ed4ade62f90fafca0be9a3c8b26ccbdd350915d858cd"} Jan 26 22:59:24 crc kubenswrapper[4793]: I0126 22:59:24.181648 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"70c43064-b9c2-4f01-9bc2-5431c5dca494","Type":"ContainerStarted","Data":"6e04fb78473a6e1d823d46381810bb168dd34ceb2332a4d5a5d581ce4f6a63e5"} Jan 26 22:59:24 crc kubenswrapper[4793]: I0126 22:59:24.182132 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:59:24 crc kubenswrapper[4793]: I0126 22:59:24.184383 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"a8704cd7-a5e9-45ca-9886-cef2f797c7f1","Type":"ContainerStarted","Data":"3c2d0ca4f2464e59b4d0807e8205259a305d4756f944fedfdb17e808c33b6aa6"} Jan 26 22:59:24 crc kubenswrapper[4793]: I0126 22:59:24.184693 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:59:24 crc kubenswrapper[4793]: I0126 22:59:24.186435 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"733142d4-49c6-4e25-a160-78aa1118d296","Type":"ContainerStarted","Data":"bc28bd9572d761503b3bdb0f59c6102f6fb3440ad5db561b927a8a1b6ecf4ee9"} Jan 26 22:59:24 crc kubenswrapper[4793]: I0126 22:59:24.186655 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:59:24 crc kubenswrapper[4793]: I0126 22:59:24.188556 4793 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a","Type":"ContainerStarted","Data":"15e92c80edfd73c107b78d52200b524418ac73e88b5179aac2d42e70b0d9da08"} Jan 26 22:59:24 crc kubenswrapper[4793]: I0126 22:59:24.188773 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:59:24 crc kubenswrapper[4793]: I0126 22:59:24.208427 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-notifications-server-0" podStartSLOduration=37.762116054 podStartE2EDuration="49.208408519s" podCreationTimestamp="2026-01-26 22:58:35 +0000 UTC" firstStartedPulling="2026-01-26 22:58:37.134124464 +0000 UTC m=+1132.122895976" lastFinishedPulling="2026-01-26 22:58:48.580416919 +0000 UTC m=+1143.569188441" observedRunningTime="2026-01-26 22:59:24.20451751 +0000 UTC m=+1179.193289022" watchObservedRunningTime="2026-01-26 22:59:24.208408519 +0000 UTC m=+1179.197180031" Jan 26 22:59:24 crc kubenswrapper[4793]: I0126 22:59:24.238161 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-server-0" podStartSLOduration=37.878208045 podStartE2EDuration="50.238142344s" podCreationTimestamp="2026-01-26 22:58:34 +0000 UTC" firstStartedPulling="2026-01-26 22:58:36.100101746 +0000 UTC m=+1131.088873258" lastFinishedPulling="2026-01-26 22:58:48.460036005 +0000 UTC m=+1143.448807557" observedRunningTime="2026-01-26 22:59:24.233304378 +0000 UTC m=+1179.222075920" watchObservedRunningTime="2026-01-26 22:59:24.238142344 +0000 UTC m=+1179.226913876" Jan 26 22:59:24 crc kubenswrapper[4793]: I0126 22:59:24.268735 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-cell1-server-0" podStartSLOduration=38.324534733 podStartE2EDuration="50.268715082s" podCreationTimestamp="2026-01-26 22:58:34 +0000 UTC" 
firstStartedPulling="2026-01-26 22:58:36.554119305 +0000 UTC m=+1131.542890817" lastFinishedPulling="2026-01-26 22:58:48.498299654 +0000 UTC m=+1143.487071166" observedRunningTime="2026-01-26 22:59:24.262810197 +0000 UTC m=+1179.251581729" watchObservedRunningTime="2026-01-26 22:59:24.268715082 +0000 UTC m=+1179.257486594" Jan 26 22:59:35 crc kubenswrapper[4793]: I0126 22:59:35.552374 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:59:35 crc kubenswrapper[4793]: I0126 22:59:35.581894 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" podStartSLOduration=49.530715379 podStartE2EDuration="1m1.581868468s" podCreationTimestamp="2026-01-26 22:58:34 +0000 UTC" firstStartedPulling="2026-01-26 22:58:36.413908046 +0000 UTC m=+1131.402679558" lastFinishedPulling="2026-01-26 22:58:48.465061135 +0000 UTC m=+1143.453832647" observedRunningTime="2026-01-26 22:59:24.288522518 +0000 UTC m=+1179.277294040" watchObservedRunningTime="2026-01-26 22:59:35.581868468 +0000 UTC m=+1190.570639970" Jan 26 22:59:35 crc kubenswrapper[4793]: I0126 22:59:35.897148 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.108842 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-db-sync-zhxm7"] Jan 26 22:59:36 crc kubenswrapper[4793]: E0126 22:59:36.109132 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a" containerName="mariadb-account-create-update" Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.109148 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a" containerName="mariadb-account-create-update" Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.109311 4793 
memory_manager.go:354] "RemoveStaleState removing state" podUID="bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a" containerName="mariadb-account-create-update"
Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.109799 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-zhxm7"
Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.113168 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data"
Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.120228 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone"
Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.120463 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts"
Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.122066 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-56gjn"
Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.129313 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-sync-zhxm7"]
Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.154646 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.258244 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288bfc86-c5a0-401c-bbda-ea48bbbd855c-combined-ca-bundle\") pod \"keystone-db-sync-zhxm7\" (UID: \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\") " pod="nova-kuttl-default/keystone-db-sync-zhxm7"
Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.258364 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/288bfc86-c5a0-401c-bbda-ea48bbbd855c-config-data\") pod \"keystone-db-sync-zhxm7\" (UID: \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\") " pod="nova-kuttl-default/keystone-db-sync-zhxm7"
Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.258466 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpbg7\" (UniqueName: \"kubernetes.io/projected/288bfc86-c5a0-401c-bbda-ea48bbbd855c-kube-api-access-qpbg7\") pod \"keystone-db-sync-zhxm7\" (UID: \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\") " pod="nova-kuttl-default/keystone-db-sync-zhxm7"
Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.360415 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288bfc86-c5a0-401c-bbda-ea48bbbd855c-combined-ca-bundle\") pod \"keystone-db-sync-zhxm7\" (UID: \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\") " pod="nova-kuttl-default/keystone-db-sync-zhxm7"
Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.360501 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/288bfc86-c5a0-401c-bbda-ea48bbbd855c-config-data\") pod \"keystone-db-sync-zhxm7\" (UID: \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\") " pod="nova-kuttl-default/keystone-db-sync-zhxm7"
Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.360595 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpbg7\" (UniqueName: \"kubernetes.io/projected/288bfc86-c5a0-401c-bbda-ea48bbbd855c-kube-api-access-qpbg7\") pod \"keystone-db-sync-zhxm7\" (UID: \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\") " pod="nova-kuttl-default/keystone-db-sync-zhxm7"
Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.366601 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/288bfc86-c5a0-401c-bbda-ea48bbbd855c-config-data\") pod \"keystone-db-sync-zhxm7\" (UID: \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\") " pod="nova-kuttl-default/keystone-db-sync-zhxm7"
Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.373344 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288bfc86-c5a0-401c-bbda-ea48bbbd855c-combined-ca-bundle\") pod \"keystone-db-sync-zhxm7\" (UID: \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\") " pod="nova-kuttl-default/keystone-db-sync-zhxm7"
Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.376612 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpbg7\" (UniqueName: \"kubernetes.io/projected/288bfc86-c5a0-401c-bbda-ea48bbbd855c-kube-api-access-qpbg7\") pod \"keystone-db-sync-zhxm7\" (UID: \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\") " pod="nova-kuttl-default/keystone-db-sync-zhxm7"
Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.429468 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-zhxm7"
Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.489544 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.911898 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-sync-zhxm7"]
Jan 26 22:59:37 crc kubenswrapper[4793]: I0126 22:59:37.285037 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-zhxm7" event={"ID":"288bfc86-c5a0-401c-bbda-ea48bbbd855c","Type":"ContainerStarted","Data":"35f6cec859d4c0cb4947d3b714b43934822523a0c753893837d0513004de5961"}
Jan 26 22:59:43 crc kubenswrapper[4793]: I0126 22:59:43.329121 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-zhxm7" event={"ID":"288bfc86-c5a0-401c-bbda-ea48bbbd855c","Type":"ContainerStarted","Data":"3e4bdec32ef11b18d32eb2dfb6bb5b1e0c14f224fc1e48974b53e00695952bb5"}
Jan 26 22:59:43 crc kubenswrapper[4793]: I0126 22:59:43.343536 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-db-sync-zhxm7" podStartSLOduration=1.4853572 podStartE2EDuration="7.343518272s" podCreationTimestamp="2026-01-26 22:59:36 +0000 UTC" firstStartedPulling="2026-01-26 22:59:36.916834451 +0000 UTC m=+1191.905605963" lastFinishedPulling="2026-01-26 22:59:42.774995493 +0000 UTC m=+1197.763767035" observedRunningTime="2026-01-26 22:59:43.343234944 +0000 UTC m=+1198.332006466" watchObservedRunningTime="2026-01-26 22:59:43.343518272 +0000 UTC m=+1198.332289784"
Jan 26 22:59:46 crc kubenswrapper[4793]: I0126 22:59:46.366008 4793 generic.go:334] "Generic (PLEG): container finished" podID="288bfc86-c5a0-401c-bbda-ea48bbbd855c" containerID="3e4bdec32ef11b18d32eb2dfb6bb5b1e0c14f224fc1e48974b53e00695952bb5" exitCode=0
Jan 26 22:59:46 crc kubenswrapper[4793]: I0126 22:59:46.366077 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-zhxm7" event={"ID":"288bfc86-c5a0-401c-bbda-ea48bbbd855c","Type":"ContainerDied","Data":"3e4bdec32ef11b18d32eb2dfb6bb5b1e0c14f224fc1e48974b53e00695952bb5"}
Jan 26 22:59:47 crc kubenswrapper[4793]: I0126 22:59:47.723131 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-zhxm7"
Jan 26 22:59:47 crc kubenswrapper[4793]: I0126 22:59:47.845856 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/288bfc86-c5a0-401c-bbda-ea48bbbd855c-config-data\") pod \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\" (UID: \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\") "
Jan 26 22:59:47 crc kubenswrapper[4793]: I0126 22:59:47.846092 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpbg7\" (UniqueName: \"kubernetes.io/projected/288bfc86-c5a0-401c-bbda-ea48bbbd855c-kube-api-access-qpbg7\") pod \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\" (UID: \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\") "
Jan 26 22:59:47 crc kubenswrapper[4793]: I0126 22:59:47.846138 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288bfc86-c5a0-401c-bbda-ea48bbbd855c-combined-ca-bundle\") pod \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\" (UID: \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\") "
Jan 26 22:59:47 crc kubenswrapper[4793]: I0126 22:59:47.854937 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/288bfc86-c5a0-401c-bbda-ea48bbbd855c-kube-api-access-qpbg7" (OuterVolumeSpecName: "kube-api-access-qpbg7") pod "288bfc86-c5a0-401c-bbda-ea48bbbd855c" (UID: "288bfc86-c5a0-401c-bbda-ea48bbbd855c"). InnerVolumeSpecName "kube-api-access-qpbg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 22:59:47 crc kubenswrapper[4793]: I0126 22:59:47.867408 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/288bfc86-c5a0-401c-bbda-ea48bbbd855c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "288bfc86-c5a0-401c-bbda-ea48bbbd855c" (UID: "288bfc86-c5a0-401c-bbda-ea48bbbd855c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 22:59:47 crc kubenswrapper[4793]: I0126 22:59:47.887223 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/288bfc86-c5a0-401c-bbda-ea48bbbd855c-config-data" (OuterVolumeSpecName: "config-data") pod "288bfc86-c5a0-401c-bbda-ea48bbbd855c" (UID: "288bfc86-c5a0-401c-bbda-ea48bbbd855c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 22:59:47 crc kubenswrapper[4793]: I0126 22:59:47.948861 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpbg7\" (UniqueName: \"kubernetes.io/projected/288bfc86-c5a0-401c-bbda-ea48bbbd855c-kube-api-access-qpbg7\") on node \"crc\" DevicePath \"\""
Jan 26 22:59:47 crc kubenswrapper[4793]: I0126 22:59:47.948928 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288bfc86-c5a0-401c-bbda-ea48bbbd855c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 22:59:47 crc kubenswrapper[4793]: I0126 22:59:47.948972 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/288bfc86-c5a0-401c-bbda-ea48bbbd855c-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.386048 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-zhxm7" event={"ID":"288bfc86-c5a0-401c-bbda-ea48bbbd855c","Type":"ContainerDied","Data":"35f6cec859d4c0cb4947d3b714b43934822523a0c753893837d0513004de5961"}
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.386110 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35f6cec859d4c0cb4947d3b714b43934822523a0c753893837d0513004de5961"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.386145 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-zhxm7"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.599590 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-bootstrap-xh8vs"]
Jan 26 22:59:48 crc kubenswrapper[4793]: E0126 22:59:48.600085 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="288bfc86-c5a0-401c-bbda-ea48bbbd855c" containerName="keystone-db-sync"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.600120 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="288bfc86-c5a0-401c-bbda-ea48bbbd855c" containerName="keystone-db-sync"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.600332 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="288bfc86-c5a0-401c-bbda-ea48bbbd855c" containerName="keystone-db-sync"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.602158 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-xh8vs"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.605565 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"osp-secret"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.605795 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.606018 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-56gjn"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.606235 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.606510 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.616834 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-xh8vs"]
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.761310 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntwrg\" (UniqueName: \"kubernetes.io/projected/6e3c9439-9c67-49d0-92be-f7b5aea5e408-kube-api-access-ntwrg\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.762365 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-credential-keys\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.762511 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-fernet-keys\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.762716 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-config-data\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.762897 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-combined-ca-bundle\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.763063 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-scripts\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.792487 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-db-sync-rn9xt"]
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.794004 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-sync-rn9xt"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.799694 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-placement-dockercfg-fvrsv"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.799930 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-scripts"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.800058 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-config-data"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.813597 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-sync-rn9xt"]
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.864128 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntwrg\" (UniqueName: \"kubernetes.io/projected/6e3c9439-9c67-49d0-92be-f7b5aea5e408-kube-api-access-ntwrg\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.864456 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-credential-keys\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.864580 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-fernet-keys\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.864688 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-config-data\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.864846 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-combined-ca-bundle\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.864993 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-scripts\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.868384 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-credential-keys\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.868727 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-fernet-keys\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.870597 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-scripts\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.871747 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-config-data\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.875330 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-combined-ca-bundle\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.890809 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntwrg\" (UniqueName: \"kubernetes.io/projected/6e3c9439-9c67-49d0-92be-f7b5aea5e408-kube-api-access-ntwrg\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.926148 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-xh8vs"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.966554 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-combined-ca-bundle\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.966679 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-config-data\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.966729 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrkm5\" (UniqueName: \"kubernetes.io/projected/26147a27-d0db-498e-bf1d-fb496d6a7b48-kube-api-access-mrkm5\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.966763 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26147a27-d0db-498e-bf1d-fb496d6a7b48-logs\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt"
Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.966792 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-scripts\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt"
Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.068157 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-config-data\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt"
Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.068243 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrkm5\" (UniqueName: \"kubernetes.io/projected/26147a27-d0db-498e-bf1d-fb496d6a7b48-kube-api-access-mrkm5\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt"
Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.068271 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26147a27-d0db-498e-bf1d-fb496d6a7b48-logs\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt"
Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.068294 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-scripts\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt"
Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.068349 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-combined-ca-bundle\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt"
Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.069295 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26147a27-d0db-498e-bf1d-fb496d6a7b48-logs\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt"
Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.082310 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-combined-ca-bundle\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt"
Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.083322 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-scripts\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt"
Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.085477 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-config-data\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt"
Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.093941 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrkm5\" (UniqueName: \"kubernetes.io/projected/26147a27-d0db-498e-bf1d-fb496d6a7b48-kube-api-access-mrkm5\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt"
Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.115119 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-sync-rn9xt"
Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.377623 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-xh8vs"]
Jan 26 22:59:49 crc kubenswrapper[4793]: W0126 22:59:49.380514 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e3c9439_9c67_49d0_92be_f7b5aea5e408.slice/crio-062809365bc93ec06fb12c51ac8551ee8aed14763725c8fe5de2e65a5d08b0c0 WatchSource:0}: Error finding container 062809365bc93ec06fb12c51ac8551ee8aed14763725c8fe5de2e65a5d08b0c0: Status 404 returned error can't find the container with id 062809365bc93ec06fb12c51ac8551ee8aed14763725c8fe5de2e65a5d08b0c0
Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.394009 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-xh8vs" event={"ID":"6e3c9439-9c67-49d0-92be-f7b5aea5e408","Type":"ContainerStarted","Data":"062809365bc93ec06fb12c51ac8551ee8aed14763725c8fe5de2e65a5d08b0c0"}
Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.591035 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-sync-rn9xt"]
Jan 26 22:59:50 crc kubenswrapper[4793]: I0126 22:59:50.404649 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-rn9xt" event={"ID":"26147a27-d0db-498e-bf1d-fb496d6a7b48","Type":"ContainerStarted","Data":"ae3549f4820c063cd76f871e76554cea9d2d893a0f2e1a36623be4d87e460a34"}
Jan 26 22:59:50 crc kubenswrapper[4793]: I0126 22:59:50.412931 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-xh8vs" event={"ID":"6e3c9439-9c67-49d0-92be-f7b5aea5e408","Type":"ContainerStarted","Data":"fdea9b9e0baa0a798c51488874e92138a3f1af0ea746b2e505a1d80e6411576f"}
Jan 26 22:59:50 crc kubenswrapper[4793]: I0126 22:59:50.446001 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-bootstrap-xh8vs" podStartSLOduration=2.445971472 podStartE2EDuration="2.445971472s" podCreationTimestamp="2026-01-26 22:59:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:59:50.438032879 +0000 UTC m=+1205.426804391" watchObservedRunningTime="2026-01-26 22:59:50.445971472 +0000 UTC m=+1205.434742984"
Jan 26 22:59:52 crc kubenswrapper[4793]: I0126 22:59:52.425668 4793 generic.go:334] "Generic (PLEG): container finished" podID="6e3c9439-9c67-49d0-92be-f7b5aea5e408" containerID="fdea9b9e0baa0a798c51488874e92138a3f1af0ea746b2e505a1d80e6411576f" exitCode=0
Jan 26 22:59:52 crc kubenswrapper[4793]: I0126 22:59:52.425735 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-xh8vs" event={"ID":"6e3c9439-9c67-49d0-92be-f7b5aea5e408","Type":"ContainerDied","Data":"fdea9b9e0baa0a798c51488874e92138a3f1af0ea746b2e505a1d80e6411576f"}
Jan 26 22:59:52 crc kubenswrapper[4793]: I0126 22:59:52.427348 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-rn9xt" event={"ID":"26147a27-d0db-498e-bf1d-fb496d6a7b48","Type":"ContainerStarted","Data":"42adf1d1b5d18c7fa51274887f38aad9de206abb589b3cb63586e984bd4dcb69"}
Jan 26 22:59:52 crc kubenswrapper[4793]: I0126 22:59:52.459683 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-db-sync-rn9xt" podStartSLOduration=1.9339202979999999 podStartE2EDuration="4.459665277s" podCreationTimestamp="2026-01-26 22:59:48 +0000 UTC" firstStartedPulling="2026-01-26 22:59:49.588725088 +0000 UTC m=+1204.577496600" lastFinishedPulling="2026-01-26 22:59:52.114470067 +0000 UTC m=+1207.103241579" observedRunningTime="2026-01-26 22:59:52.455205181 +0000 UTC m=+1207.443976703" watchObservedRunningTime="2026-01-26 22:59:52.459665277 +0000 UTC m=+1207.448436789"
Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.794356 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-xh8vs"
Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.943737 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-config-data\") pod \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") "
Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.944269 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-credential-keys\") pod \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") "
Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.944297 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-scripts\") pod \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") "
Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.944344 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntwrg\" (UniqueName: \"kubernetes.io/projected/6e3c9439-9c67-49d0-92be-f7b5aea5e408-kube-api-access-ntwrg\") pod \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") "
Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.944380 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-fernet-keys\") pod \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") "
Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.944406 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-combined-ca-bundle\") pod \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") "
Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.950786 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6e3c9439-9c67-49d0-92be-f7b5aea5e408" (UID: "6e3c9439-9c67-49d0-92be-f7b5aea5e408"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.951536 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6e3c9439-9c67-49d0-92be-f7b5aea5e408" (UID: "6e3c9439-9c67-49d0-92be-f7b5aea5e408"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.952008 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-scripts" (OuterVolumeSpecName: "scripts") pod "6e3c9439-9c67-49d0-92be-f7b5aea5e408" (UID: "6e3c9439-9c67-49d0-92be-f7b5aea5e408"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.951990 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e3c9439-9c67-49d0-92be-f7b5aea5e408-kube-api-access-ntwrg" (OuterVolumeSpecName: "kube-api-access-ntwrg") pod "6e3c9439-9c67-49d0-92be-f7b5aea5e408" (UID: "6e3c9439-9c67-49d0-92be-f7b5aea5e408"). InnerVolumeSpecName "kube-api-access-ntwrg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.972901 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-config-data" (OuterVolumeSpecName: "config-data") pod "6e3c9439-9c67-49d0-92be-f7b5aea5e408" (UID: "6e3c9439-9c67-49d0-92be-f7b5aea5e408"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.974455 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e3c9439-9c67-49d0-92be-f7b5aea5e408" (UID: "6e3c9439-9c67-49d0-92be-f7b5aea5e408"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 22:59:54 crc kubenswrapper[4793]: I0126 22:59:54.045612 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntwrg\" (UniqueName: \"kubernetes.io/projected/6e3c9439-9c67-49d0-92be-f7b5aea5e408-kube-api-access-ntwrg\") on node \"crc\" DevicePath \"\""
Jan 26 22:59:54 crc kubenswrapper[4793]: I0126 22:59:54.045644 4793 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 26 22:59:54 crc kubenswrapper[4793]: I0126 22:59:54.045654 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 22:59:54 crc kubenswrapper[4793]: I0126 22:59:54.045664 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 22:59:54 crc kubenswrapper[4793]: I0126 22:59:54.045674 4793 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 26 22:59:54 crc kubenswrapper[4793]: I0126 22:59:54.045684 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 22:59:54 crc kubenswrapper[4793]: I0126 22:59:54.454777 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-xh8vs" event={"ID":"6e3c9439-9c67-49d0-92be-f7b5aea5e408","Type":"ContainerDied","Data":"062809365bc93ec06fb12c51ac8551ee8aed14763725c8fe5de2e65a5d08b0c0"}
Jan 26 22:59:54 crc kubenswrapper[4793]: I0126 22:59:54.454858 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="062809365bc93ec06fb12c51ac8551ee8aed14763725c8fe5de2e65a5d08b0c0"
Jan 26 22:59:54 crc kubenswrapper[4793]: I0126 22:59:54.454891 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-xh8vs"
Jan 26 22:59:54 crc kubenswrapper[4793]: I0126 22:59:54.470755 4793 generic.go:334] "Generic (PLEG): container finished" podID="26147a27-d0db-498e-bf1d-fb496d6a7b48" containerID="42adf1d1b5d18c7fa51274887f38aad9de206abb589b3cb63586e984bd4dcb69" exitCode=0
Jan 26 22:59:54 crc kubenswrapper[4793]: I0126 22:59:54.470834 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-rn9xt" event={"ID":"26147a27-d0db-498e-bf1d-fb496d6a7b48","Type":"ContainerDied","Data":"42adf1d1b5d18c7fa51274887f38aad9de206abb589b3cb63586e984bd4dcb69"}
Jan 26 22:59:54 crc kubenswrapper[4793]: I0126 22:59:54.992140 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-xh8vs"]
Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.000648 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-xh8vs"]
Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.080506 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-bootstrap-5hcwl"]
Jan 26 22:59:55 crc kubenswrapper[4793]: E0126 22:59:55.080819 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e3c9439-9c67-49d0-92be-f7b5aea5e408" containerName="keystone-bootstrap"
Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.080836 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e3c9439-9c67-49d0-92be-f7b5aea5e408" containerName="keystone-bootstrap"
Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.080996 4793 memory_manager.go:354] "RemoveStaleState removing state"
podUID="6e3c9439-9c67-49d0-92be-f7b5aea5e408" containerName="keystone-bootstrap" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.081525 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.086143 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.086163 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-56gjn" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.086863 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.086929 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"osp-secret" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.089521 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.110244 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-5hcwl"] Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.162595 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-scripts\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.162844 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-config-data\") pod \"keystone-bootstrap-5hcwl\" (UID: 
\"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.162966 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbbbr\" (UniqueName: \"kubernetes.io/projected/66c97183-25d5-4ca6-bede-619df68d1471-kube-api-access-jbbbr\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.163071 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-combined-ca-bundle\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.163160 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-credential-keys\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.163297 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-fernet-keys\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.264255 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-scripts\") pod 
\"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.264305 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-config-data\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.264329 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbbbr\" (UniqueName: \"kubernetes.io/projected/66c97183-25d5-4ca6-bede-619df68d1471-kube-api-access-jbbbr\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.264370 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-combined-ca-bundle\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.264402 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-credential-keys\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.264430 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-fernet-keys\") pod \"keystone-bootstrap-5hcwl\" (UID: 
\"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.271470 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-fernet-keys\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.271513 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-scripts\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.272265 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-credential-keys\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.272954 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-combined-ca-bundle\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.273643 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-config-data\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc 
kubenswrapper[4793]: I0126 22:59:55.301563 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbbbr\" (UniqueName: \"kubernetes.io/projected/66c97183-25d5-4ca6-bede-619df68d1471-kube-api-access-jbbbr\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.419651 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.778956 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e3c9439-9c67-49d0-92be-f7b5aea5e408" path="/var/lib/kubelet/pods/6e3c9439-9c67-49d0-92be-f7b5aea5e408/volumes" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.843231 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-5hcwl"] Jan 26 22:59:55 crc kubenswrapper[4793]: W0126 22:59:55.843918 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66c97183_25d5_4ca6_bede_619df68d1471.slice/crio-1bfcc06bdfe1a9b55d097850bfefc5646103312d342182b7f72fcc4669bff71a WatchSource:0}: Error finding container 1bfcc06bdfe1a9b55d097850bfefc5646103312d342182b7f72fcc4669bff71a: Status 404 returned error can't find the container with id 1bfcc06bdfe1a9b55d097850bfefc5646103312d342182b7f72fcc4669bff71a Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.856255 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-db-sync-rn9xt" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.009896 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrkm5\" (UniqueName: \"kubernetes.io/projected/26147a27-d0db-498e-bf1d-fb496d6a7b48-kube-api-access-mrkm5\") pod \"26147a27-d0db-498e-bf1d-fb496d6a7b48\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.010021 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-config-data\") pod \"26147a27-d0db-498e-bf1d-fb496d6a7b48\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.010149 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26147a27-d0db-498e-bf1d-fb496d6a7b48-logs\") pod \"26147a27-d0db-498e-bf1d-fb496d6a7b48\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.010283 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-scripts\") pod \"26147a27-d0db-498e-bf1d-fb496d6a7b48\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.010658 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26147a27-d0db-498e-bf1d-fb496d6a7b48-logs" (OuterVolumeSpecName: "logs") pod "26147a27-d0db-498e-bf1d-fb496d6a7b48" (UID: "26147a27-d0db-498e-bf1d-fb496d6a7b48"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.011141 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-combined-ca-bundle\") pod \"26147a27-d0db-498e-bf1d-fb496d6a7b48\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.011660 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26147a27-d0db-498e-bf1d-fb496d6a7b48-logs\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.015370 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26147a27-d0db-498e-bf1d-fb496d6a7b48-kube-api-access-mrkm5" (OuterVolumeSpecName: "kube-api-access-mrkm5") pod "26147a27-d0db-498e-bf1d-fb496d6a7b48" (UID: "26147a27-d0db-498e-bf1d-fb496d6a7b48"). InnerVolumeSpecName "kube-api-access-mrkm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.017179 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-scripts" (OuterVolumeSpecName: "scripts") pod "26147a27-d0db-498e-bf1d-fb496d6a7b48" (UID: "26147a27-d0db-498e-bf1d-fb496d6a7b48"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.031945 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-config-data" (OuterVolumeSpecName: "config-data") pod "26147a27-d0db-498e-bf1d-fb496d6a7b48" (UID: "26147a27-d0db-498e-bf1d-fb496d6a7b48"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.033876 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26147a27-d0db-498e-bf1d-fb496d6a7b48" (UID: "26147a27-d0db-498e-bf1d-fb496d6a7b48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.113249 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.113301 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrkm5\" (UniqueName: \"kubernetes.io/projected/26147a27-d0db-498e-bf1d-fb496d6a7b48-kube-api-access-mrkm5\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.113322 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.113338 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.485461 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-rn9xt" event={"ID":"26147a27-d0db-498e-bf1d-fb496d6a7b48","Type":"ContainerDied","Data":"ae3549f4820c063cd76f871e76554cea9d2d893a0f2e1a36623be4d87e460a34"} Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.485499 4793 pod_container_deletor.go:80] "Container not found 
in pod's containers" containerID="ae3549f4820c063cd76f871e76554cea9d2d893a0f2e1a36623be4d87e460a34" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.485558 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-sync-rn9xt" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.496649 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-5hcwl" event={"ID":"66c97183-25d5-4ca6-bede-619df68d1471","Type":"ContainerStarted","Data":"edeb56d663cc21ffac8620b620d9103115a9fe78e769cf71f65bf0ea0163950a"} Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.496887 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-5hcwl" event={"ID":"66c97183-25d5-4ca6-bede-619df68d1471","Type":"ContainerStarted","Data":"1bfcc06bdfe1a9b55d097850bfefc5646103312d342182b7f72fcc4669bff71a"} Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.530425 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-bootstrap-5hcwl" podStartSLOduration=1.530403325 podStartE2EDuration="1.530403325s" podCreationTimestamp="2026-01-26 22:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:59:56.521041552 +0000 UTC m=+1211.509813054" watchObservedRunningTime="2026-01-26 22:59:56.530403325 +0000 UTC m=+1211.519174837" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.585257 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-655b5d9d44-xt9hv"] Jan 26 22:59:56 crc kubenswrapper[4793]: E0126 22:59:56.585562 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26147a27-d0db-498e-bf1d-fb496d6a7b48" containerName="placement-db-sync" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.585575 4793 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="26147a27-d0db-498e-bf1d-fb496d6a7b48" containerName="placement-db-sync" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.585722 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="26147a27-d0db-498e-bf1d-fb496d6a7b48" containerName="placement-db-sync" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.586505 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.589864 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-placement-dockercfg-fvrsv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.594295 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-config-data" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.594448 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-655b5d9d44-xt9hv"] Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.595977 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-scripts" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.729569 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-scripts\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.729618 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-combined-ca-bundle\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 
26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.729647 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6n6d\" (UniqueName: \"kubernetes.io/projected/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-kube-api-access-n6n6d\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.729667 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-config-data\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.729726 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-logs\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.831129 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-logs\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.831267 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-scripts\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc 
kubenswrapper[4793]: I0126 22:59:56.831297 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-combined-ca-bundle\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.831320 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6n6d\" (UniqueName: \"kubernetes.io/projected/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-kube-api-access-n6n6d\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.831346 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-config-data\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.831724 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-logs\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.838173 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-scripts\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.838632 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-combined-ca-bundle\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.844909 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-config-data\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.849730 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6n6d\" (UniqueName: \"kubernetes.io/projected/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-kube-api-access-n6n6d\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.910715 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:57 crc kubenswrapper[4793]: I0126 22:59:57.321035 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-655b5d9d44-xt9hv"] Jan 26 22:59:57 crc kubenswrapper[4793]: W0126 22:59:57.337340 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7aa4144_bcdd_4fb8_84f4_fb976c8c4a0e.slice/crio-0b6dc9e7ebd2670d336de62035dc3009cd3fc5aef6a5fa198e37f4c18473285a WatchSource:0}: Error finding container 0b6dc9e7ebd2670d336de62035dc3009cd3fc5aef6a5fa198e37f4c18473285a: Status 404 returned error can't find the container with id 0b6dc9e7ebd2670d336de62035dc3009cd3fc5aef6a5fa198e37f4c18473285a Jan 26 22:59:57 crc kubenswrapper[4793]: I0126 22:59:57.510038 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" event={"ID":"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e","Type":"ContainerStarted","Data":"0b6dc9e7ebd2670d336de62035dc3009cd3fc5aef6a5fa198e37f4c18473285a"} Jan 26 22:59:58 crc kubenswrapper[4793]: I0126 22:59:58.523453 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" event={"ID":"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e","Type":"ContainerStarted","Data":"3395d5f2d3a64863228c8a5a8c3f503786df769bcab98061231efead25feb9b9"} Jan 26 22:59:58 crc kubenswrapper[4793]: I0126 22:59:58.524015 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" event={"ID":"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e","Type":"ContainerStarted","Data":"5be80f442506a831ab76a7cf49181400847f78f458353f8b9971687e33034154"} Jan 26 22:59:58 crc kubenswrapper[4793]: I0126 22:59:58.524058 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:58 crc kubenswrapper[4793]: I0126 
22:59:58.524089 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:58 crc kubenswrapper[4793]: I0126 22:59:58.557759 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" podStartSLOduration=2.5577364620000003 podStartE2EDuration="2.557736462s" podCreationTimestamp="2026-01-26 22:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:59:58.542144545 +0000 UTC m=+1213.530916087" watchObservedRunningTime="2026-01-26 22:59:58.557736462 +0000 UTC m=+1213.546507964" Jan 26 22:59:59 crc kubenswrapper[4793]: I0126 22:59:59.532663 4793 generic.go:334] "Generic (PLEG): container finished" podID="66c97183-25d5-4ca6-bede-619df68d1471" containerID="edeb56d663cc21ffac8620b620d9103115a9fe78e769cf71f65bf0ea0163950a" exitCode=0 Jan 26 22:59:59 crc kubenswrapper[4793]: I0126 22:59:59.532808 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-5hcwl" event={"ID":"66c97183-25d5-4ca6-bede-619df68d1471","Type":"ContainerDied","Data":"edeb56d663cc21ffac8620b620d9103115a9fe78e769cf71f65bf0ea0163950a"} Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.148115 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s"] Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.152694 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.160174 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.163660 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.187627 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s"] Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.286548 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfxr2\" (UniqueName: \"kubernetes.io/projected/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-kube-api-access-bfxr2\") pod \"collect-profiles-29491140-4wv2s\" (UID: \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.286772 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-secret-volume\") pod \"collect-profiles-29491140-4wv2s\" (UID: \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.286808 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-config-volume\") pod \"collect-profiles-29491140-4wv2s\" (UID: \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.388938 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfxr2\" (UniqueName: \"kubernetes.io/projected/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-kube-api-access-bfxr2\") pod \"collect-profiles-29491140-4wv2s\" (UID: \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.389060 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-secret-volume\") pod \"collect-profiles-29491140-4wv2s\" (UID: \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.389105 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-config-volume\") pod \"collect-profiles-29491140-4wv2s\" (UID: \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.390968 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-config-volume\") pod \"collect-profiles-29491140-4wv2s\" (UID: \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.396087 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-secret-volume\") pod \"collect-profiles-29491140-4wv2s\" (UID: \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.420927 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfxr2\" (UniqueName: \"kubernetes.io/projected/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-kube-api-access-bfxr2\") pod \"collect-profiles-29491140-4wv2s\" (UID: \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.504565 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.908872 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.999384 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbbbr\" (UniqueName: \"kubernetes.io/projected/66c97183-25d5-4ca6-bede-619df68d1471-kube-api-access-jbbbr\") pod \"66c97183-25d5-4ca6-bede-619df68d1471\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.999513 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-config-data\") pod \"66c97183-25d5-4ca6-bede-619df68d1471\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.999542 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-fernet-keys\") pod \"66c97183-25d5-4ca6-bede-619df68d1471\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.999613 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-credential-keys\") pod \"66c97183-25d5-4ca6-bede-619df68d1471\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.999680 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-combined-ca-bundle\") pod \"66c97183-25d5-4ca6-bede-619df68d1471\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.999727 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-scripts\") pod \"66c97183-25d5-4ca6-bede-619df68d1471\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.006246 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "66c97183-25d5-4ca6-bede-619df68d1471" (UID: "66c97183-25d5-4ca6-bede-619df68d1471"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.006575 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "66c97183-25d5-4ca6-bede-619df68d1471" (UID: "66c97183-25d5-4ca6-bede-619df68d1471"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.006819 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66c97183-25d5-4ca6-bede-619df68d1471-kube-api-access-jbbbr" (OuterVolumeSpecName: "kube-api-access-jbbbr") pod "66c97183-25d5-4ca6-bede-619df68d1471" (UID: "66c97183-25d5-4ca6-bede-619df68d1471"). InnerVolumeSpecName "kube-api-access-jbbbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.007019 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-scripts" (OuterVolumeSpecName: "scripts") pod "66c97183-25d5-4ca6-bede-619df68d1471" (UID: "66c97183-25d5-4ca6-bede-619df68d1471"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.034375 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-config-data" (OuterVolumeSpecName: "config-data") pod "66c97183-25d5-4ca6-bede-619df68d1471" (UID: "66c97183-25d5-4ca6-bede-619df68d1471"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.044823 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66c97183-25d5-4ca6-bede-619df68d1471" (UID: "66c97183-25d5-4ca6-bede-619df68d1471"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.049686 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s"] Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.102271 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.102635 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.102839 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbbbr\" (UniqueName: \"kubernetes.io/projected/66c97183-25d5-4ca6-bede-619df68d1471-kube-api-access-jbbbr\") on node \"crc\" DevicePath \"\"" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.103030 4793 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.103230 4793 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.103419 4793 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.554476 4793 generic.go:334] "Generic (PLEG): container finished" podID="8f1bb97b-a1a2-40f7-b79b-8002bcce7c89" containerID="ef3d2d9cccb326a3ea1d8eee537506e17562bf7264a5c79f77fa04751d2cee45" exitCode=0 Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.554540 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" event={"ID":"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89","Type":"ContainerDied","Data":"ef3d2d9cccb326a3ea1d8eee537506e17562bf7264a5c79f77fa04751d2cee45"} Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.554843 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" event={"ID":"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89","Type":"ContainerStarted","Data":"2b632696e958cc87fbdb7bb5241e6abfe3807fc3828296466a72c7d0491bea3b"} Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.557105 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-5hcwl" event={"ID":"66c97183-25d5-4ca6-bede-619df68d1471","Type":"ContainerDied","Data":"1bfcc06bdfe1a9b55d097850bfefc5646103312d342182b7f72fcc4669bff71a"} Jan 26 23:00:01 crc 
kubenswrapper[4793]: I0126 23:00:01.557135 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bfcc06bdfe1a9b55d097850bfefc5646103312d342182b7f72fcc4669bff71a" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.557469 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.636502 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-564b65fb54-5sx8h"] Jan 26 23:00:01 crc kubenswrapper[4793]: E0126 23:00:01.636900 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66c97183-25d5-4ca6-bede-619df68d1471" containerName="keystone-bootstrap" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.636921 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="66c97183-25d5-4ca6-bede-619df68d1471" containerName="keystone-bootstrap" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.637129 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="66c97183-25d5-4ca6-bede-619df68d1471" containerName="keystone-bootstrap" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.637768 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.640116 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.640946 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.641046 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-56gjn" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.641423 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.659892 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-564b65fb54-5sx8h"] Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.714581 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-credential-keys\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.714674 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-494l6\" (UniqueName: \"kubernetes.io/projected/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-kube-api-access-494l6\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.714742 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-scripts\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.714769 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-combined-ca-bundle\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.714800 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-fernet-keys\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.714863 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-config-data\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.816667 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-config-data\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.816754 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-credential-keys\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.816775 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-494l6\" (UniqueName: \"kubernetes.io/projected/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-kube-api-access-494l6\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.816823 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-combined-ca-bundle\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.816842 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-scripts\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.816869 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-fernet-keys\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.822348 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-credential-keys\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.822421 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-fernet-keys\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.822483 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-combined-ca-bundle\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.824750 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-scripts\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.833285 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-494l6\" (UniqueName: \"kubernetes.io/projected/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-kube-api-access-494l6\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.840228 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-config-data\") pod 
\"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.958623 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:02 crc kubenswrapper[4793]: I0126 23:00:02.437449 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-564b65fb54-5sx8h"] Jan 26 23:00:02 crc kubenswrapper[4793]: W0126 23:00:02.446162 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dfd7a46_fa72_4bfe_9f03_7ac1c0fed348.slice/crio-14e3484e2379199840ade5336c646d940ae7e4fa02c40476840153f47c12a31e WatchSource:0}: Error finding container 14e3484e2379199840ade5336c646d940ae7e4fa02c40476840153f47c12a31e: Status 404 returned error can't find the container with id 14e3484e2379199840ade5336c646d940ae7e4fa02c40476840153f47c12a31e Jan 26 23:00:02 crc kubenswrapper[4793]: I0126 23:00:02.564971 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" event={"ID":"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348","Type":"ContainerStarted","Data":"14e3484e2379199840ade5336c646d940ae7e4fa02c40476840153f47c12a31e"} Jan 26 23:00:02 crc kubenswrapper[4793]: I0126 23:00:02.842801 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:02 crc kubenswrapper[4793]: I0126 23:00:02.932873 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfxr2\" (UniqueName: \"kubernetes.io/projected/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-kube-api-access-bfxr2\") pod \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\" (UID: \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\") " Jan 26 23:00:02 crc kubenswrapper[4793]: I0126 23:00:02.932965 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-secret-volume\") pod \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\" (UID: \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\") " Jan 26 23:00:02 crc kubenswrapper[4793]: I0126 23:00:02.933059 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-config-volume\") pod \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\" (UID: \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\") " Jan 26 23:00:02 crc kubenswrapper[4793]: I0126 23:00:02.933908 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-config-volume" (OuterVolumeSpecName: "config-volume") pod "8f1bb97b-a1a2-40f7-b79b-8002bcce7c89" (UID: "8f1bb97b-a1a2-40f7-b79b-8002bcce7c89"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:00:02 crc kubenswrapper[4793]: I0126 23:00:02.937046 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-kube-api-access-bfxr2" (OuterVolumeSpecName: "kube-api-access-bfxr2") pod "8f1bb97b-a1a2-40f7-b79b-8002bcce7c89" (UID: "8f1bb97b-a1a2-40f7-b79b-8002bcce7c89"). 
InnerVolumeSpecName "kube-api-access-bfxr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:00:02 crc kubenswrapper[4793]: I0126 23:00:02.937537 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8f1bb97b-a1a2-40f7-b79b-8002bcce7c89" (UID: "8f1bb97b-a1a2-40f7-b79b-8002bcce7c89"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:00:03 crc kubenswrapper[4793]: I0126 23:00:03.034626 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfxr2\" (UniqueName: \"kubernetes.io/projected/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-kube-api-access-bfxr2\") on node \"crc\" DevicePath \"\"" Jan 26 23:00:03 crc kubenswrapper[4793]: I0126 23:00:03.034657 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 23:00:03 crc kubenswrapper[4793]: I0126 23:00:03.034667 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 23:00:03 crc kubenswrapper[4793]: I0126 23:00:03.576845 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:03 crc kubenswrapper[4793]: I0126 23:00:03.576886 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" event={"ID":"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89","Type":"ContainerDied","Data":"2b632696e958cc87fbdb7bb5241e6abfe3807fc3828296466a72c7d0491bea3b"} Jan 26 23:00:03 crc kubenswrapper[4793]: I0126 23:00:03.576939 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b632696e958cc87fbdb7bb5241e6abfe3807fc3828296466a72c7d0491bea3b" Jan 26 23:00:03 crc kubenswrapper[4793]: I0126 23:00:03.579573 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" event={"ID":"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348","Type":"ContainerStarted","Data":"3bb7392c381a55291adae0f08b831cbd66b3755881e2d7a971b72ce159c9c567"} Jan 26 23:00:03 crc kubenswrapper[4793]: I0126 23:00:03.579806 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:03 crc kubenswrapper[4793]: I0126 23:00:03.614091 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" podStartSLOduration=2.613972064 podStartE2EDuration="2.613972064s" podCreationTimestamp="2026-01-26 23:00:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:00:03.604719984 +0000 UTC m=+1218.593491526" watchObservedRunningTime="2026-01-26 23:00:03.613972064 +0000 UTC m=+1218.602743616" Jan 26 23:00:28 crc kubenswrapper[4793]: I0126 23:00:28.078822 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 23:00:28 crc kubenswrapper[4793]: I0126 23:00:28.087280 4793 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 23:00:33 crc kubenswrapper[4793]: I0126 23:00:33.368474 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.598602 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 26 23:00:36 crc kubenswrapper[4793]: E0126 23:00:36.600404 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1bb97b-a1a2-40f7-b79b-8002bcce7c89" containerName="collect-profiles" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.600423 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1bb97b-a1a2-40f7-b79b-8002bcce7c89" containerName="collect-profiles" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.600596 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f1bb97b-a1a2-40f7-b79b-8002bcce7c89" containerName="collect-profiles" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.601044 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.605552 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-config" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.605888 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-config-secret" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.606203 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstackclient-openstackclient-dockercfg-p8bsf" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.607725 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.741210 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dfe7ef28-7eb1-48c8-b2bb-8990933e8971-openstack-config-secret\") pod \"openstackclient\" (UID: \"dfe7ef28-7eb1-48c8-b2bb-8990933e8971\") " pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.741283 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfe7ef28-7eb1-48c8-b2bb-8990933e8971-combined-ca-bundle\") pod \"openstackclient\" (UID: \"dfe7ef28-7eb1-48c8-b2bb-8990933e8971\") " pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.741341 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dfe7ef28-7eb1-48c8-b2bb-8990933e8971-openstack-config\") pod \"openstackclient\" (UID: \"dfe7ef28-7eb1-48c8-b2bb-8990933e8971\") " pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc 
kubenswrapper[4793]: I0126 23:00:36.741362 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k57t\" (UniqueName: \"kubernetes.io/projected/dfe7ef28-7eb1-48c8-b2bb-8990933e8971-kube-api-access-7k57t\") pod \"openstackclient\" (UID: \"dfe7ef28-7eb1-48c8-b2bb-8990933e8971\") " pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.842603 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfe7ef28-7eb1-48c8-b2bb-8990933e8971-combined-ca-bundle\") pod \"openstackclient\" (UID: \"dfe7ef28-7eb1-48c8-b2bb-8990933e8971\") " pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.842685 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dfe7ef28-7eb1-48c8-b2bb-8990933e8971-openstack-config\") pod \"openstackclient\" (UID: \"dfe7ef28-7eb1-48c8-b2bb-8990933e8971\") " pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.842708 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k57t\" (UniqueName: \"kubernetes.io/projected/dfe7ef28-7eb1-48c8-b2bb-8990933e8971-kube-api-access-7k57t\") pod \"openstackclient\" (UID: \"dfe7ef28-7eb1-48c8-b2bb-8990933e8971\") " pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.842754 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dfe7ef28-7eb1-48c8-b2bb-8990933e8971-openstack-config-secret\") pod \"openstackclient\" (UID: \"dfe7ef28-7eb1-48c8-b2bb-8990933e8971\") " pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.843784 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dfe7ef28-7eb1-48c8-b2bb-8990933e8971-openstack-config\") pod \"openstackclient\" (UID: \"dfe7ef28-7eb1-48c8-b2bb-8990933e8971\") " pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.847929 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dfe7ef28-7eb1-48c8-b2bb-8990933e8971-openstack-config-secret\") pod \"openstackclient\" (UID: \"dfe7ef28-7eb1-48c8-b2bb-8990933e8971\") " pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.857692 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfe7ef28-7eb1-48c8-b2bb-8990933e8971-combined-ca-bundle\") pod \"openstackclient\" (UID: \"dfe7ef28-7eb1-48c8-b2bb-8990933e8971\") " pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.874322 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k57t\" (UniqueName: \"kubernetes.io/projected/dfe7ef28-7eb1-48c8-b2bb-8990933e8971-kube-api-access-7k57t\") pod \"openstackclient\" (UID: \"dfe7ef28-7eb1-48c8-b2bb-8990933e8971\") " pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.922961 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 26 23:00:37 crc kubenswrapper[4793]: I0126 23:00:37.193910 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 26 23:00:37 crc kubenswrapper[4793]: I0126 23:00:37.886898 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstackclient" event={"ID":"dfe7ef28-7eb1-48c8-b2bb-8990933e8971","Type":"ContainerStarted","Data":"5f778100eddf01a756eba7bf0b72af0e7334c772ac8e13f0dd5c47c2475207b6"} Jan 26 23:00:45 crc kubenswrapper[4793]: I0126 23:00:45.949738 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstackclient" event={"ID":"dfe7ef28-7eb1-48c8-b2bb-8990933e8971","Type":"ContainerStarted","Data":"aa49776f8519ee53d4d7acf0dd08f0be7d10021a30eee7f26524c2163e6ee7bb"} Jan 26 23:00:45 crc kubenswrapper[4793]: I0126 23:00:45.972039 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstackclient" podStartSLOduration=2.193053141 podStartE2EDuration="9.972020649s" podCreationTimestamp="2026-01-26 23:00:36 +0000 UTC" firstStartedPulling="2026-01-26 23:00:37.195953211 +0000 UTC m=+1252.184724723" lastFinishedPulling="2026-01-26 23:00:44.974920719 +0000 UTC m=+1259.963692231" observedRunningTime="2026-01-26 23:00:45.965825905 +0000 UTC m=+1260.954597447" watchObservedRunningTime="2026-01-26 23:00:45.972020649 +0000 UTC m=+1260.960792171" Jan 26 23:00:57 crc kubenswrapper[4793]: I0126 23:00:57.903691 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b"] Jan 26 23:00:57 crc kubenswrapper[4793]: I0126 23:00:57.904305 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" podUID="c130e1aa-1f05-45ff-8364-714b79fa7282" containerName="manager" 
containerID="cri-o://27aa43d53cac21ae560dc4dfba6c32cef6066e0e37407cd3e0517161f5497f76" gracePeriod=10 Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.065932 4793 generic.go:334] "Generic (PLEG): container finished" podID="c130e1aa-1f05-45ff-8364-714b79fa7282" containerID="27aa43d53cac21ae560dc4dfba6c32cef6066e0e37407cd3e0517161f5497f76" exitCode=0 Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.065975 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" event={"ID":"c130e1aa-1f05-45ff-8364-714b79fa7282","Type":"ContainerDied","Data":"27aa43d53cac21ae560dc4dfba6c32cef6066e0e37407cd3e0517161f5497f76"} Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.434077 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs"] Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.434899 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs" podUID="3a8dce43-2d79-4865-b17b-c73d6809865d" containerName="operator" containerID="cri-o://4ca2ce24aa7792f3f194f47af65c5c2a24088c6bd8e2fe97ed52b89cbf67652d" gracePeriod=10 Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.443419 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-index-864fr"] Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.444510 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-864fr" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.454415 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk"] Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.455354 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.465513 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-index-dockercfg-6sh7v" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.466294 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk"] Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.484597 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-864fr"] Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.535949 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkchx\" (UniqueName: \"kubernetes.io/projected/209b817f-0bec-4d6b-814c-ae2a07913a56-kube-api-access-zkchx\") pod \"nova-operator-controller-manager-5b4fc7b894-6xkrk\" (UID: \"209b817f-0bec-4d6b-814c-ae2a07913a56\") " pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.536017 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqqm9\" (UniqueName: \"kubernetes.io/projected/c0a1696e-8859-4708-84f1-57b65d7cc16a-kube-api-access-wqqm9\") pod \"nova-operator-index-864fr\" (UID: \"c0a1696e-8859-4708-84f1-57b65d7cc16a\") " pod="openstack-operators/nova-operator-index-864fr" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.603529 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.637070 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkchx\" (UniqueName: \"kubernetes.io/projected/209b817f-0bec-4d6b-814c-ae2a07913a56-kube-api-access-zkchx\") pod \"nova-operator-controller-manager-5b4fc7b894-6xkrk\" (UID: \"209b817f-0bec-4d6b-814c-ae2a07913a56\") " pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.637139 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqqm9\" (UniqueName: \"kubernetes.io/projected/c0a1696e-8859-4708-84f1-57b65d7cc16a-kube-api-access-wqqm9\") pod \"nova-operator-index-864fr\" (UID: \"c0a1696e-8859-4708-84f1-57b65d7cc16a\") " pod="openstack-operators/nova-operator-index-864fr" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.673855 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkchx\" (UniqueName: \"kubernetes.io/projected/209b817f-0bec-4d6b-814c-ae2a07913a56-kube-api-access-zkchx\") pod \"nova-operator-controller-manager-5b4fc7b894-6xkrk\" (UID: \"209b817f-0bec-4d6b-814c-ae2a07913a56\") " pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.687790 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqqm9\" (UniqueName: \"kubernetes.io/projected/c0a1696e-8859-4708-84f1-57b65d7cc16a-kube-api-access-wqqm9\") pod \"nova-operator-index-864fr\" (UID: \"c0a1696e-8859-4708-84f1-57b65d7cc16a\") " pod="openstack-operators/nova-operator-index-864fr" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.737882 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gwc4\" (UniqueName: 
\"kubernetes.io/projected/c130e1aa-1f05-45ff-8364-714b79fa7282-kube-api-access-7gwc4\") pod \"c130e1aa-1f05-45ff-8364-714b79fa7282\" (UID: \"c130e1aa-1f05-45ff-8364-714b79fa7282\") " Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.746493 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c130e1aa-1f05-45ff-8364-714b79fa7282-kube-api-access-7gwc4" (OuterVolumeSpecName: "kube-api-access-7gwc4") pod "c130e1aa-1f05-45ff-8364-714b79fa7282" (UID: "c130e1aa-1f05-45ff-8364-714b79fa7282"). InnerVolumeSpecName "kube-api-access-7gwc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.839187 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gwc4\" (UniqueName: \"kubernetes.io/projected/c130e1aa-1f05-45ff-8364-714b79fa7282-kube-api-access-7gwc4\") on node \"crc\" DevicePath \"\"" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.912059 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-864fr" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.933873 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk" Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.093676 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" event={"ID":"c130e1aa-1f05-45ff-8364-714b79fa7282","Type":"ContainerDied","Data":"a5c867b9927299be310ccbc0cb616dae9ad68f5d7f00ed7bfa18b82b1aeee089"} Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.093700 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.093946 4793 scope.go:117] "RemoveContainer" containerID="27aa43d53cac21ae560dc4dfba6c32cef6066e0e37407cd3e0517161f5497f76" Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.097014 4793 generic.go:334] "Generic (PLEG): container finished" podID="3a8dce43-2d79-4865-b17b-c73d6809865d" containerID="4ca2ce24aa7792f3f194f47af65c5c2a24088c6bd8e2fe97ed52b89cbf67652d" exitCode=0 Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.097057 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs" event={"ID":"3a8dce43-2d79-4865-b17b-c73d6809865d","Type":"ContainerDied","Data":"4ca2ce24aa7792f3f194f47af65c5c2a24088c6bd8e2fe97ed52b89cbf67652d"} Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.097079 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs" event={"ID":"3a8dce43-2d79-4865-b17b-c73d6809865d","Type":"ContainerDied","Data":"e948222ad969867dfca2ac3068c15d29f9c21e465ef410e50da6d8c9a179da98"} Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.097090 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e948222ad969867dfca2ac3068c15d29f9c21e465ef410e50da6d8c9a179da98" Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.118010 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs" Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.142842 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b"] Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.152365 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b"] Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.249069 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnkk7\" (UniqueName: \"kubernetes.io/projected/3a8dce43-2d79-4865-b17b-c73d6809865d-kube-api-access-gnkk7\") pod \"3a8dce43-2d79-4865-b17b-c73d6809865d\" (UID: \"3a8dce43-2d79-4865-b17b-c73d6809865d\") " Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.259138 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a8dce43-2d79-4865-b17b-c73d6809865d-kube-api-access-gnkk7" (OuterVolumeSpecName: "kube-api-access-gnkk7") pod "3a8dce43-2d79-4865-b17b-c73d6809865d" (UID: "3a8dce43-2d79-4865-b17b-c73d6809865d"). InnerVolumeSpecName "kube-api-access-gnkk7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.350659 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnkk7\" (UniqueName: \"kubernetes.io/projected/3a8dce43-2d79-4865-b17b-c73d6809865d-kube-api-access-gnkk7\") on node \"crc\" DevicePath \"\"" Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.433634 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-864fr"] Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.506460 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk"] Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.773272 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c130e1aa-1f05-45ff-8364-714b79fa7282" path="/var/lib/kubelet/pods/c130e1aa-1f05-45ff-8364-714b79fa7282/volumes" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.105136 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk" event={"ID":"209b817f-0bec-4d6b-814c-ae2a07913a56","Type":"ContainerStarted","Data":"7158bf42d356c8ab937717c85403f66c562ae7cc47f3757935b34b5c54e24a0d"} Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.105220 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk" event={"ID":"209b817f-0bec-4d6b-814c-ae2a07913a56","Type":"ContainerStarted","Data":"8af648461e636b5cfbfd62ec09835634b28050a89febd9aa985dd87f13b014d6"} Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.105239 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.106746 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-index-864fr" event={"ID":"c0a1696e-8859-4708-84f1-57b65d7cc16a","Type":"ContainerStarted","Data":"9bc06a03cfa59b765710ac34e1759e4ed6d47a27f44674e2a5d6fc7078255302"} Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.106769 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-864fr" event={"ID":"c0a1696e-8859-4708-84f1-57b65d7cc16a","Type":"ContainerStarted","Data":"9617a2561dd7c9268bc956e249d81fe520a078ea6f6a5e9bdbac901eeb86922c"} Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.107726 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.128815 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk" podStartSLOduration=2.128793276 podStartE2EDuration="2.128793276s" podCreationTimestamp="2026-01-26 23:00:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:01:00.120313498 +0000 UTC m=+1275.109085010" watchObservedRunningTime="2026-01-26 23:01:00.128793276 +0000 UTC m=+1275.117564788" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.138935 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-cron-29491141-mgww5"] Jan 26 23:01:00 crc kubenswrapper[4793]: E0126 23:01:00.139306 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8dce43-2d79-4865-b17b-c73d6809865d" containerName="operator" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.139327 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8dce43-2d79-4865-b17b-c73d6809865d" containerName="operator" Jan 26 23:01:00 crc kubenswrapper[4793]: E0126 23:01:00.139338 4793 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c130e1aa-1f05-45ff-8364-714b79fa7282" containerName="manager" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.139346 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c130e1aa-1f05-45ff-8364-714b79fa7282" containerName="manager" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.139534 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c130e1aa-1f05-45ff-8364-714b79fa7282" containerName="manager" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.139558 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a8dce43-2d79-4865-b17b-c73d6809865d" containerName="operator" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.140168 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.148265 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs"] Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.178485 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs"] Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.186415 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-cron-29491141-mgww5"] Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.194246 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-index-864fr" podStartSLOduration=1.992319285 podStartE2EDuration="2.194226163s" podCreationTimestamp="2026-01-26 23:00:58 +0000 UTC" firstStartedPulling="2026-01-26 23:00:59.428887409 +0000 UTC m=+1274.417658921" lastFinishedPulling="2026-01-26 23:00:59.630794287 +0000 UTC m=+1274.619565799" observedRunningTime="2026-01-26 23:01:00.156065912 +0000 UTC m=+1275.144837444" 
watchObservedRunningTime="2026-01-26 23:01:00.194226163 +0000 UTC m=+1275.182997675" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.262535 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-config-data\") pod \"keystone-cron-29491141-mgww5\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.262600 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhtzx\" (UniqueName: \"kubernetes.io/projected/86415a48-3c2d-430b-a38c-f43e1c518984-kube-api-access-xhtzx\") pod \"keystone-cron-29491141-mgww5\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.262653 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-combined-ca-bundle\") pod \"keystone-cron-29491141-mgww5\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.262698 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-fernet-keys\") pod \"keystone-cron-29491141-mgww5\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.363721 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-fernet-keys\") pod \"keystone-cron-29491141-mgww5\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.363869 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-config-data\") pod \"keystone-cron-29491141-mgww5\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.364908 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhtzx\" (UniqueName: \"kubernetes.io/projected/86415a48-3c2d-430b-a38c-f43e1c518984-kube-api-access-xhtzx\") pod \"keystone-cron-29491141-mgww5\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.364961 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-combined-ca-bundle\") pod \"keystone-cron-29491141-mgww5\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.370657 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-config-data\") pod \"keystone-cron-29491141-mgww5\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.381875 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-combined-ca-bundle\") pod \"keystone-cron-29491141-mgww5\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.390042 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-fernet-keys\") pod \"keystone-cron-29491141-mgww5\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.393151 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhtzx\" (UniqueName: \"kubernetes.io/projected/86415a48-3c2d-430b-a38c-f43e1c518984-kube-api-access-xhtzx\") pod \"keystone-cron-29491141-mgww5\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.454242 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.746140 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-cron-29491141-mgww5"] Jan 26 23:01:01 crc kubenswrapper[4793]: I0126 23:01:01.117312 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-cron-29491141-mgww5" event={"ID":"86415a48-3c2d-430b-a38c-f43e1c518984","Type":"ContainerStarted","Data":"510edd1d58864f0ad4c2e90e3e555c8b700644d5d6c6bb95477b04e71a7bf309"} Jan 26 23:01:01 crc kubenswrapper[4793]: I0126 23:01:01.117364 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-cron-29491141-mgww5" event={"ID":"86415a48-3c2d-430b-a38c-f43e1c518984","Type":"ContainerStarted","Data":"f1418c6e20b5c23b9316cce52eaa87883fcaf7b57fc4c52015ff85562e5cf3bb"} Jan 26 23:01:01 crc kubenswrapper[4793]: I0126 23:01:01.135414 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-cron-29491141-mgww5" podStartSLOduration=1.135394303 podStartE2EDuration="1.135394303s" podCreationTimestamp="2026-01-26 23:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:01:01.133390496 +0000 UTC m=+1276.122162008" watchObservedRunningTime="2026-01-26 23:01:01.135394303 +0000 UTC m=+1276.124165815" Jan 26 23:01:01 crc kubenswrapper[4793]: I0126 23:01:01.770958 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a8dce43-2d79-4865-b17b-c73d6809865d" path="/var/lib/kubelet/pods/3a8dce43-2d79-4865-b17b-c73d6809865d/volumes" Jan 26 23:01:03 crc kubenswrapper[4793]: I0126 23:01:03.137067 4793 generic.go:334] "Generic (PLEG): container finished" podID="86415a48-3c2d-430b-a38c-f43e1c518984" containerID="510edd1d58864f0ad4c2e90e3e555c8b700644d5d6c6bb95477b04e71a7bf309" exitCode=0 Jan 26 
23:01:03 crc kubenswrapper[4793]: I0126 23:01:03.137119 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-cron-29491141-mgww5" event={"ID":"86415a48-3c2d-430b-a38c-f43e1c518984","Type":"ContainerDied","Data":"510edd1d58864f0ad4c2e90e3e555c8b700644d5d6c6bb95477b04e71a7bf309"}
Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.498172 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-cron-29491141-mgww5"
Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.651183 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-fernet-keys\") pod \"86415a48-3c2d-430b-a38c-f43e1c518984\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") "
Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.651345 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-combined-ca-bundle\") pod \"86415a48-3c2d-430b-a38c-f43e1c518984\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") "
Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.651383 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhtzx\" (UniqueName: \"kubernetes.io/projected/86415a48-3c2d-430b-a38c-f43e1c518984-kube-api-access-xhtzx\") pod \"86415a48-3c2d-430b-a38c-f43e1c518984\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") "
Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.651464 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-config-data\") pod \"86415a48-3c2d-430b-a38c-f43e1c518984\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") "
Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.657070 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "86415a48-3c2d-430b-a38c-f43e1c518984" (UID: "86415a48-3c2d-430b-a38c-f43e1c518984"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.659780 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86415a48-3c2d-430b-a38c-f43e1c518984-kube-api-access-xhtzx" (OuterVolumeSpecName: "kube-api-access-xhtzx") pod "86415a48-3c2d-430b-a38c-f43e1c518984" (UID: "86415a48-3c2d-430b-a38c-f43e1c518984"). InnerVolumeSpecName "kube-api-access-xhtzx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.674228 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86415a48-3c2d-430b-a38c-f43e1c518984" (UID: "86415a48-3c2d-430b-a38c-f43e1c518984"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.694660 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-config-data" (OuterVolumeSpecName: "config-data") pod "86415a48-3c2d-430b-a38c-f43e1c518984" (UID: "86415a48-3c2d-430b-a38c-f43e1c518984"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.753000 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.753034 4793 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.753044 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.753054 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhtzx\" (UniqueName: \"kubernetes.io/projected/86415a48-3c2d-430b-a38c-f43e1c518984-kube-api-access-xhtzx\") on node \"crc\" DevicePath \"\""
Jan 26 23:01:05 crc kubenswrapper[4793]: I0126 23:01:05.151102 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-cron-29491141-mgww5" event={"ID":"86415a48-3c2d-430b-a38c-f43e1c518984","Type":"ContainerDied","Data":"f1418c6e20b5c23b9316cce52eaa87883fcaf7b57fc4c52015ff85562e5cf3bb"}
Jan 26 23:01:05 crc kubenswrapper[4793]: I0126 23:01:05.151485 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1418c6e20b5c23b9316cce52eaa87883fcaf7b57fc4c52015ff85562e5cf3bb"
Jan 26 23:01:05 crc kubenswrapper[4793]: I0126 23:01:05.151173 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-cron-29491141-mgww5"
Jan 26 23:01:08 crc kubenswrapper[4793]: I0126 23:01:08.912774 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-index-864fr"
Jan 26 23:01:08 crc kubenswrapper[4793]: I0126 23:01:08.913319 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/nova-operator-index-864fr"
Jan 26 23:01:08 crc kubenswrapper[4793]: I0126 23:01:08.944979 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk"
Jan 26 23:01:08 crc kubenswrapper[4793]: I0126 23:01:08.964641 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/nova-operator-index-864fr"
Jan 26 23:01:09 crc kubenswrapper[4793]: I0126 23:01:09.215979 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-index-864fr"
Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.238149 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj"]
Jan 26 23:01:15 crc kubenswrapper[4793]: E0126 23:01:15.238959 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86415a48-3c2d-430b-a38c-f43e1c518984" containerName="keystone-cron"
Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.238973 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="86415a48-3c2d-430b-a38c-f43e1c518984" containerName="keystone-cron"
Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.239126 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="86415a48-3c2d-430b-a38c-f43e1c518984" containerName="keystone-cron"
Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.240136 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj"
Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.242201 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-97km8"
Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.254432 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj"]
Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.306541 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e023476b-0850-4e37-97da-0cd1a5d9425f-util\") pod \"03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj\" (UID: \"e023476b-0850-4e37-97da-0cd1a5d9425f\") " pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj"
Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.306636 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jctk4\" (UniqueName: \"kubernetes.io/projected/e023476b-0850-4e37-97da-0cd1a5d9425f-kube-api-access-jctk4\") pod \"03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj\" (UID: \"e023476b-0850-4e37-97da-0cd1a5d9425f\") " pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj"
Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.306804 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e023476b-0850-4e37-97da-0cd1a5d9425f-bundle\") pod \"03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj\" (UID: \"e023476b-0850-4e37-97da-0cd1a5d9425f\") " pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj"
Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.408476 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e023476b-0850-4e37-97da-0cd1a5d9425f-bundle\") pod \"03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj\" (UID: \"e023476b-0850-4e37-97da-0cd1a5d9425f\") " pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj"
Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.408590 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e023476b-0850-4e37-97da-0cd1a5d9425f-util\") pod \"03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj\" (UID: \"e023476b-0850-4e37-97da-0cd1a5d9425f\") " pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj"
Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.408658 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jctk4\" (UniqueName: \"kubernetes.io/projected/e023476b-0850-4e37-97da-0cd1a5d9425f-kube-api-access-jctk4\") pod \"03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj\" (UID: \"e023476b-0850-4e37-97da-0cd1a5d9425f\") " pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj"
Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.408998 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e023476b-0850-4e37-97da-0cd1a5d9425f-util\") pod \"03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj\" (UID: \"e023476b-0850-4e37-97da-0cd1a5d9425f\") " pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj"
Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.409036 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e023476b-0850-4e37-97da-0cd1a5d9425f-bundle\") pod \"03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj\" (UID: \"e023476b-0850-4e37-97da-0cd1a5d9425f\") " pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj"
Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.426766 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jctk4\" (UniqueName: \"kubernetes.io/projected/e023476b-0850-4e37-97da-0cd1a5d9425f-kube-api-access-jctk4\") pod \"03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj\" (UID: \"e023476b-0850-4e37-97da-0cd1a5d9425f\") " pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj"
Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.580996 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj"
Jan 26 23:01:16 crc kubenswrapper[4793]: I0126 23:01:16.047953 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj"]
Jan 26 23:01:16 crc kubenswrapper[4793]: I0126 23:01:16.256584 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj" event={"ID":"e023476b-0850-4e37-97da-0cd1a5d9425f","Type":"ContainerStarted","Data":"223acbb2112ca2a3d9ad487a70b7c917e0e1a36ebc9f6ec9d51871ef60679ac3"}
Jan 26 23:01:17 crc kubenswrapper[4793]: I0126 23:01:17.264824 4793 generic.go:334] "Generic (PLEG): container finished" podID="e023476b-0850-4e37-97da-0cd1a5d9425f" containerID="a4745030e8ec4c487c22710d1f886bb405f34f7e81e13a2cf7aa864f4c721a34" exitCode=0
Jan 26 23:01:17 crc kubenswrapper[4793]: I0126 23:01:17.265177 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj" event={"ID":"e023476b-0850-4e37-97da-0cd1a5d9425f","Type":"ContainerDied","Data":"a4745030e8ec4c487c22710d1f886bb405f34f7e81e13a2cf7aa864f4c721a34"}
Jan 26 23:01:18 crc kubenswrapper[4793]: I0126 23:01:18.322719 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 23:01:18 crc kubenswrapper[4793]: I0126 23:01:18.323084 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 23:01:24 crc kubenswrapper[4793]: I0126 23:01:24.327977 4793 generic.go:334] "Generic (PLEG): container finished" podID="e023476b-0850-4e37-97da-0cd1a5d9425f" containerID="77fd0c333b126c4a68f3d46a8f89c8cc529d2716cd15ab65a4dd0c736b61937a" exitCode=0
Jan 26 23:01:24 crc kubenswrapper[4793]: I0126 23:01:24.328148 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj" event={"ID":"e023476b-0850-4e37-97da-0cd1a5d9425f","Type":"ContainerDied","Data":"77fd0c333b126c4a68f3d46a8f89c8cc529d2716cd15ab65a4dd0c736b61937a"}
Jan 26 23:01:25 crc kubenswrapper[4793]: I0126 23:01:25.337517 4793 generic.go:334] "Generic (PLEG): container finished" podID="e023476b-0850-4e37-97da-0cd1a5d9425f" containerID="fa3aa6e996ad3b391dbd7651793d78bb8ab394e82ddca76a2e50f865b1270576" exitCode=0
Jan 26 23:01:25 crc kubenswrapper[4793]: I0126 23:01:25.337561 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj" event={"ID":"e023476b-0850-4e37-97da-0cd1a5d9425f","Type":"ContainerDied","Data":"fa3aa6e996ad3b391dbd7651793d78bb8ab394e82ddca76a2e50f865b1270576"}
Jan 26 23:01:26 crc kubenswrapper[4793]: I0126 23:01:26.662236 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj"
Jan 26 23:01:26 crc kubenswrapper[4793]: I0126 23:01:26.783693 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e023476b-0850-4e37-97da-0cd1a5d9425f-bundle\") pod \"e023476b-0850-4e37-97da-0cd1a5d9425f\" (UID: \"e023476b-0850-4e37-97da-0cd1a5d9425f\") "
Jan 26 23:01:26 crc kubenswrapper[4793]: I0126 23:01:26.783796 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e023476b-0850-4e37-97da-0cd1a5d9425f-util\") pod \"e023476b-0850-4e37-97da-0cd1a5d9425f\" (UID: \"e023476b-0850-4e37-97da-0cd1a5d9425f\") "
Jan 26 23:01:26 crc kubenswrapper[4793]: I0126 23:01:26.784296 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jctk4\" (UniqueName: \"kubernetes.io/projected/e023476b-0850-4e37-97da-0cd1a5d9425f-kube-api-access-jctk4\") pod \"e023476b-0850-4e37-97da-0cd1a5d9425f\" (UID: \"e023476b-0850-4e37-97da-0cd1a5d9425f\") "
Jan 26 23:01:26 crc kubenswrapper[4793]: I0126 23:01:26.786312 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e023476b-0850-4e37-97da-0cd1a5d9425f-bundle" (OuterVolumeSpecName: "bundle") pod "e023476b-0850-4e37-97da-0cd1a5d9425f" (UID: "e023476b-0850-4e37-97da-0cd1a5d9425f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:01:26 crc kubenswrapper[4793]: I0126 23:01:26.790730 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e023476b-0850-4e37-97da-0cd1a5d9425f-kube-api-access-jctk4" (OuterVolumeSpecName: "kube-api-access-jctk4") pod "e023476b-0850-4e37-97da-0cd1a5d9425f" (UID: "e023476b-0850-4e37-97da-0cd1a5d9425f"). InnerVolumeSpecName "kube-api-access-jctk4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:01:26 crc kubenswrapper[4793]: I0126 23:01:26.798524 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e023476b-0850-4e37-97da-0cd1a5d9425f-util" (OuterVolumeSpecName: "util") pod "e023476b-0850-4e37-97da-0cd1a5d9425f" (UID: "e023476b-0850-4e37-97da-0cd1a5d9425f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:01:26 crc kubenswrapper[4793]: I0126 23:01:26.886398 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jctk4\" (UniqueName: \"kubernetes.io/projected/e023476b-0850-4e37-97da-0cd1a5d9425f-kube-api-access-jctk4\") on node \"crc\" DevicePath \"\""
Jan 26 23:01:26 crc kubenswrapper[4793]: I0126 23:01:26.886438 4793 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e023476b-0850-4e37-97da-0cd1a5d9425f-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 23:01:26 crc kubenswrapper[4793]: I0126 23:01:26.886453 4793 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e023476b-0850-4e37-97da-0cd1a5d9425f-util\") on node \"crc\" DevicePath \"\""
Jan 26 23:01:27 crc kubenswrapper[4793]: I0126 23:01:27.356737 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj" event={"ID":"e023476b-0850-4e37-97da-0cd1a5d9425f","Type":"ContainerDied","Data":"223acbb2112ca2a3d9ad487a70b7c917e0e1a36ebc9f6ec9d51871ef60679ac3"}
Jan 26 23:01:27 crc kubenswrapper[4793]: I0126 23:01:27.356806 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj"
Jan 26 23:01:27 crc kubenswrapper[4793]: I0126 23:01:27.356813 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="223acbb2112ca2a3d9ad487a70b7c917e0e1a36ebc9f6ec9d51871ef60679ac3"
Jan 26 23:01:48 crc kubenswrapper[4793]: I0126 23:01:48.322604 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 23:01:48 crc kubenswrapper[4793]: I0126 23:01:48.323158 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 23:02:18 crc kubenswrapper[4793]: I0126 23:02:18.322621 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 23:02:18 crc kubenswrapper[4793]: I0126 23:02:18.323250 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 23:02:18 crc kubenswrapper[4793]: I0126 23:02:18.323319 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5htjl"
Jan 26 23:02:18 crc kubenswrapper[4793]: I0126 23:02:18.324223 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c6d612632a90ff0cb1b4809ea801026e4879766ff5fb99a4ea1a8127b7cf2cc3"} pod="openshift-machine-config-operator/machine-config-daemon-5htjl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 23:02:18 crc kubenswrapper[4793]: I0126 23:02:18.324294 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" containerID="cri-o://c6d612632a90ff0cb1b4809ea801026e4879766ff5fb99a4ea1a8127b7cf2cc3" gracePeriod=600
Jan 26 23:02:18 crc kubenswrapper[4793]: I0126 23:02:18.741781 4793 generic.go:334] "Generic (PLEG): container finished" podID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerID="c6d612632a90ff0cb1b4809ea801026e4879766ff5fb99a4ea1a8127b7cf2cc3" exitCode=0
Jan 26 23:02:18 crc kubenswrapper[4793]: I0126 23:02:18.741860 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerDied","Data":"c6d612632a90ff0cb1b4809ea801026e4879766ff5fb99a4ea1a8127b7cf2cc3"}
Jan 26 23:02:18 crc kubenswrapper[4793]: I0126 23:02:18.742151 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerStarted","Data":"b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497"}
Jan 26 23:02:18 crc kubenswrapper[4793]: I0126 23:02:18.742174 4793 scope.go:117] "RemoveContainer" containerID="0f0bcae8737d5ff963297bee9670c968431e1a6c097e0d397cb380eea5515587"
Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.007147 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4qxcn"]
Jan 26 23:02:43 crc kubenswrapper[4793]: E0126 23:02:43.007967 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e023476b-0850-4e37-97da-0cd1a5d9425f" containerName="util"
Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.007980 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e023476b-0850-4e37-97da-0cd1a5d9425f" containerName="util"
Jan 26 23:02:43 crc kubenswrapper[4793]: E0126 23:02:43.008017 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e023476b-0850-4e37-97da-0cd1a5d9425f" containerName="extract"
Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.008023 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e023476b-0850-4e37-97da-0cd1a5d9425f" containerName="extract"
Jan 26 23:02:43 crc kubenswrapper[4793]: E0126 23:02:43.008056 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e023476b-0850-4e37-97da-0cd1a5d9425f" containerName="pull"
Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.008062 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e023476b-0850-4e37-97da-0cd1a5d9425f" containerName="pull"
Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.008227 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e023476b-0850-4e37-97da-0cd1a5d9425f" containerName="extract"
Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.009220 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4qxcn"
Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.031254 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4qxcn"]
Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.105835 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-catalog-content\") pod \"redhat-operators-4qxcn\" (UID: \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\") " pod="openshift-marketplace/redhat-operators-4qxcn"
Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.105928 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-utilities\") pod \"redhat-operators-4qxcn\" (UID: \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\") " pod="openshift-marketplace/redhat-operators-4qxcn"
Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.106041 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vfjt\" (UniqueName: \"kubernetes.io/projected/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-kube-api-access-7vfjt\") pod \"redhat-operators-4qxcn\" (UID: \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\") " pod="openshift-marketplace/redhat-operators-4qxcn"
Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.207854 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-catalog-content\") pod \"redhat-operators-4qxcn\" (UID: \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\") " pod="openshift-marketplace/redhat-operators-4qxcn"
Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.207931 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-utilities\") pod \"redhat-operators-4qxcn\" (UID: \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\") " pod="openshift-marketplace/redhat-operators-4qxcn"
Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.207990 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vfjt\" (UniqueName: \"kubernetes.io/projected/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-kube-api-access-7vfjt\") pod \"redhat-operators-4qxcn\" (UID: \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\") " pod="openshift-marketplace/redhat-operators-4qxcn"
Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.208484 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-catalog-content\") pod \"redhat-operators-4qxcn\" (UID: \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\") " pod="openshift-marketplace/redhat-operators-4qxcn"
Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.208628 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-utilities\") pod \"redhat-operators-4qxcn\" (UID: \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\") " pod="openshift-marketplace/redhat-operators-4qxcn"
Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.247340 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vfjt\" (UniqueName: \"kubernetes.io/projected/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-kube-api-access-7vfjt\") pod \"redhat-operators-4qxcn\" (UID: \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\") " pod="openshift-marketplace/redhat-operators-4qxcn"
Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.332130 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4qxcn"
Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.852176 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4qxcn"]
Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.935114 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qxcn" event={"ID":"f3348a11-eb42-42bd-bf6e-8d1f08402fc2","Type":"ContainerStarted","Data":"59673bd85054904a5f4197ca518a84e664721612c44bea1adaa36581d08d7ad7"}
Jan 26 23:02:44 crc kubenswrapper[4793]: I0126 23:02:44.943989 4793 generic.go:334] "Generic (PLEG): container finished" podID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" containerID="10b042a5c8dd85bb9307f96395ba6d325886a51a7c87d21538e763bc64424eea" exitCode=0
Jan 26 23:02:44 crc kubenswrapper[4793]: I0126 23:02:44.944074 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qxcn" event={"ID":"f3348a11-eb42-42bd-bf6e-8d1f08402fc2","Type":"ContainerDied","Data":"10b042a5c8dd85bb9307f96395ba6d325886a51a7c87d21538e763bc64424eea"}
Jan 26 23:02:44 crc kubenswrapper[4793]: I0126 23:02:44.946860 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 23:02:46 crc kubenswrapper[4793]: I0126 23:02:46.964895 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qxcn" event={"ID":"f3348a11-eb42-42bd-bf6e-8d1f08402fc2","Type":"ContainerStarted","Data":"3ca505a73234ac600a4236422b0f13f611116978422e19c706c0dab14a661992"}
Jan 26 23:02:47 crc kubenswrapper[4793]: I0126 23:02:47.975350 4793 generic.go:334] "Generic (PLEG): container finished" podID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" containerID="3ca505a73234ac600a4236422b0f13f611116978422e19c706c0dab14a661992" exitCode=0
Jan 26 23:02:47 crc kubenswrapper[4793]: I0126 23:02:47.975399 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qxcn" event={"ID":"f3348a11-eb42-42bd-bf6e-8d1f08402fc2","Type":"ContainerDied","Data":"3ca505a73234ac600a4236422b0f13f611116978422e19c706c0dab14a661992"}
Jan 26 23:02:49 crc kubenswrapper[4793]: I0126 23:02:49.992926 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qxcn" event={"ID":"f3348a11-eb42-42bd-bf6e-8d1f08402fc2","Type":"ContainerStarted","Data":"5b5dd36f2813a9da99774f2bcbed5dcca352a0d6af2cd6662df6d38589a8e5a2"}
Jan 26 23:02:50 crc kubenswrapper[4793]: I0126 23:02:50.038264 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4qxcn" podStartSLOduration=3.976172911 podStartE2EDuration="8.038235121s" podCreationTimestamp="2026-01-26 23:02:42 +0000 UTC" firstStartedPulling="2026-01-26 23:02:44.946666689 +0000 UTC m=+1379.935438201" lastFinishedPulling="2026-01-26 23:02:49.008728899 +0000 UTC m=+1383.997500411" observedRunningTime="2026-01-26 23:02:50.030587366 +0000 UTC m=+1385.019358888" watchObservedRunningTime="2026-01-26 23:02:50.038235121 +0000 UTC m=+1385.027006633"
Jan 26 23:02:53 crc kubenswrapper[4793]: I0126 23:02:53.332399 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4qxcn"
Jan 26 23:02:53 crc kubenswrapper[4793]: I0126 23:02:53.332769 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4qxcn"
Jan 26 23:02:54 crc kubenswrapper[4793]: I0126 23:02:54.384207 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4qxcn" podUID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" containerName="registry-server" probeResult="failure" output=<
Jan 26 23:02:54 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s
Jan 26 23:02:54 crc kubenswrapper[4793]: >
Jan 26 23:03:03 crc kubenswrapper[4793]: I0126 23:03:03.395678 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4qxcn"
Jan 26 23:03:03 crc kubenswrapper[4793]: I0126 23:03:03.436068 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4qxcn"
Jan 26 23:03:05 crc kubenswrapper[4793]: I0126 23:03:05.805220 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4qxcn"]
Jan 26 23:03:05 crc kubenswrapper[4793]: I0126 23:03:05.805929 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4qxcn" podUID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" containerName="registry-server" containerID="cri-o://5b5dd36f2813a9da99774f2bcbed5dcca352a0d6af2cd6662df6d38589a8e5a2" gracePeriod=2
Jan 26 23:03:06 crc kubenswrapper[4793]: I0126 23:03:06.132812 4793 generic.go:334] "Generic (PLEG): container finished" podID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" containerID="5b5dd36f2813a9da99774f2bcbed5dcca352a0d6af2cd6662df6d38589a8e5a2" exitCode=0
Jan 26 23:03:06 crc kubenswrapper[4793]: I0126 23:03:06.132835 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qxcn" event={"ID":"f3348a11-eb42-42bd-bf6e-8d1f08402fc2","Type":"ContainerDied","Data":"5b5dd36f2813a9da99774f2bcbed5dcca352a0d6af2cd6662df6d38589a8e5a2"}
Jan 26 23:03:06 crc kubenswrapper[4793]: I0126 23:03:06.213443 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4qxcn"
Jan 26 23:03:06 crc kubenswrapper[4793]: I0126 23:03:06.269546 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vfjt\" (UniqueName: \"kubernetes.io/projected/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-kube-api-access-7vfjt\") pod \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\" (UID: \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\") "
Jan 26 23:03:06 crc kubenswrapper[4793]: I0126 23:03:06.269681 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-catalog-content\") pod \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\" (UID: \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\") "
Jan 26 23:03:06 crc kubenswrapper[4793]: I0126 23:03:06.269792 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-utilities\") pod \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\" (UID: \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\") "
Jan 26 23:03:06 crc kubenswrapper[4793]: I0126 23:03:06.270679 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-utilities" (OuterVolumeSpecName: "utilities") pod "f3348a11-eb42-42bd-bf6e-8d1f08402fc2" (UID: "f3348a11-eb42-42bd-bf6e-8d1f08402fc2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:03:06 crc kubenswrapper[4793]: I0126 23:03:06.275964 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-kube-api-access-7vfjt" (OuterVolumeSpecName: "kube-api-access-7vfjt") pod "f3348a11-eb42-42bd-bf6e-8d1f08402fc2" (UID: "f3348a11-eb42-42bd-bf6e-8d1f08402fc2"). InnerVolumeSpecName "kube-api-access-7vfjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:03:06 crc kubenswrapper[4793]: I0126 23:03:06.371290 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 23:03:06 crc kubenswrapper[4793]: I0126 23:03:06.371320 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vfjt\" (UniqueName: \"kubernetes.io/projected/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-kube-api-access-7vfjt\") on node \"crc\" DevicePath \"\""
Jan 26 23:03:06 crc kubenswrapper[4793]: I0126 23:03:06.393512 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3348a11-eb42-42bd-bf6e-8d1f08402fc2" (UID: "f3348a11-eb42-42bd-bf6e-8d1f08402fc2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:03:06 crc kubenswrapper[4793]: I0126 23:03:06.472674 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 23:03:07 crc kubenswrapper[4793]: I0126 23:03:07.141587 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qxcn" event={"ID":"f3348a11-eb42-42bd-bf6e-8d1f08402fc2","Type":"ContainerDied","Data":"59673bd85054904a5f4197ca518a84e664721612c44bea1adaa36581d08d7ad7"}
Jan 26 23:03:07 crc kubenswrapper[4793]: I0126 23:03:07.141636 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4qxcn"
Jan 26 23:03:07 crc kubenswrapper[4793]: I0126 23:03:07.141640 4793 scope.go:117] "RemoveContainer" containerID="5b5dd36f2813a9da99774f2bcbed5dcca352a0d6af2cd6662df6d38589a8e5a2"
Jan 26 23:03:07 crc kubenswrapper[4793]: I0126 23:03:07.163540 4793 scope.go:117] "RemoveContainer" containerID="3ca505a73234ac600a4236422b0f13f611116978422e19c706c0dab14a661992"
Jan 26 23:03:07 crc kubenswrapper[4793]: I0126 23:03:07.172499 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4qxcn"]
Jan 26 23:03:07 crc kubenswrapper[4793]: I0126 23:03:07.184169 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4qxcn"]
Jan 26 23:03:07 crc kubenswrapper[4793]: I0126 23:03:07.200752 4793 scope.go:117] "RemoveContainer" containerID="10b042a5c8dd85bb9307f96395ba6d325886a51a7c87d21538e763bc64424eea"
Jan 26 23:03:07 crc kubenswrapper[4793]: I0126 23:03:07.773545 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" path="/var/lib/kubelet/pods/f3348a11-eb42-42bd-bf6e-8d1f08402fc2/volumes"
Jan 26 23:03:46 crc kubenswrapper[4793]: I0126 23:03:46.471359 4793 scope.go:117] "RemoveContainer" containerID="4ca2ce24aa7792f3f194f47af65c5c2a24088c6bd8e2fe97ed52b89cbf67652d"
Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.632125 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7b2qz"]
Jan 26 23:04:08 crc kubenswrapper[4793]: E0126 23:04:08.632983 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" containerName="extract-content"
Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.632997 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" containerName="extract-content"
Jan 26 23:04:08 crc kubenswrapper[4793]: E0126
23:04:08.633053 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" containerName="registry-server" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.633060 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" containerName="registry-server" Jan 26 23:04:08 crc kubenswrapper[4793]: E0126 23:04:08.633079 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" containerName="extract-utilities" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.633086 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" containerName="extract-utilities" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.633243 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" containerName="registry-server" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.634283 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.641443 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7b2qz"] Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.689265 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e05e40a-7d6e-4928-ae25-a286d1d55532-utilities\") pod \"certified-operators-7b2qz\" (UID: \"9e05e40a-7d6e-4928-ae25-a286d1d55532\") " pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.689326 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbc84\" (UniqueName: \"kubernetes.io/projected/9e05e40a-7d6e-4928-ae25-a286d1d55532-kube-api-access-cbc84\") pod \"certified-operators-7b2qz\" (UID: \"9e05e40a-7d6e-4928-ae25-a286d1d55532\") " pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.689462 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e05e40a-7d6e-4928-ae25-a286d1d55532-catalog-content\") pod \"certified-operators-7b2qz\" (UID: \"9e05e40a-7d6e-4928-ae25-a286d1d55532\") " pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.791251 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e05e40a-7d6e-4928-ae25-a286d1d55532-catalog-content\") pod \"certified-operators-7b2qz\" (UID: \"9e05e40a-7d6e-4928-ae25-a286d1d55532\") " pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.791610 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e05e40a-7d6e-4928-ae25-a286d1d55532-utilities\") pod \"certified-operators-7b2qz\" (UID: \"9e05e40a-7d6e-4928-ae25-a286d1d55532\") " pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.791760 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbc84\" (UniqueName: \"kubernetes.io/projected/9e05e40a-7d6e-4928-ae25-a286d1d55532-kube-api-access-cbc84\") pod \"certified-operators-7b2qz\" (UID: \"9e05e40a-7d6e-4928-ae25-a286d1d55532\") " pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.791778 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e05e40a-7d6e-4928-ae25-a286d1d55532-catalog-content\") pod \"certified-operators-7b2qz\" (UID: \"9e05e40a-7d6e-4928-ae25-a286d1d55532\") " pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.792211 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e05e40a-7d6e-4928-ae25-a286d1d55532-utilities\") pod \"certified-operators-7b2qz\" (UID: \"9e05e40a-7d6e-4928-ae25-a286d1d55532\") " pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.813392 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbc84\" (UniqueName: \"kubernetes.io/projected/9e05e40a-7d6e-4928-ae25-a286d1d55532-kube-api-access-cbc84\") pod \"certified-operators-7b2qz\" (UID: \"9e05e40a-7d6e-4928-ae25-a286d1d55532\") " pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.980175 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:09 crc kubenswrapper[4793]: I0126 23:04:09.575579 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7b2qz"] Jan 26 23:04:09 crc kubenswrapper[4793]: I0126 23:04:09.627834 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b2qz" event={"ID":"9e05e40a-7d6e-4928-ae25-a286d1d55532","Type":"ContainerStarted","Data":"699fc96e963d34aae177ddd3239e0f6efefb165d36950a2ff83786b3615f0891"} Jan 26 23:04:10 crc kubenswrapper[4793]: I0126 23:04:10.638003 4793 generic.go:334] "Generic (PLEG): container finished" podID="9e05e40a-7d6e-4928-ae25-a286d1d55532" containerID="2ebbfbd36a4f00a429af72e8bd75253733316bd4275f445fe8f46a09b8506eac" exitCode=0 Jan 26 23:04:10 crc kubenswrapper[4793]: I0126 23:04:10.638112 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b2qz" event={"ID":"9e05e40a-7d6e-4928-ae25-a286d1d55532","Type":"ContainerDied","Data":"2ebbfbd36a4f00a429af72e8bd75253733316bd4275f445fe8f46a09b8506eac"} Jan 26 23:04:13 crc kubenswrapper[4793]: I0126 23:04:13.663419 4793 generic.go:334] "Generic (PLEG): container finished" podID="9e05e40a-7d6e-4928-ae25-a286d1d55532" containerID="7959fa44827f66b3423528084a9583b377c547bf8bebf66abccbbc5d445f2a11" exitCode=0 Jan 26 23:04:13 crc kubenswrapper[4793]: I0126 23:04:13.663480 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b2qz" event={"ID":"9e05e40a-7d6e-4928-ae25-a286d1d55532","Type":"ContainerDied","Data":"7959fa44827f66b3423528084a9583b377c547bf8bebf66abccbbc5d445f2a11"} Jan 26 23:04:16 crc kubenswrapper[4793]: I0126 23:04:16.710860 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b2qz" 
event={"ID":"9e05e40a-7d6e-4928-ae25-a286d1d55532","Type":"ContainerStarted","Data":"64c4b69f3118bd8cf32760e7d3938d902fd5af9d994b161517334e4edf3ee7a2"} Jan 26 23:04:16 crc kubenswrapper[4793]: I0126 23:04:16.737402 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7b2qz" podStartSLOduration=4.047006454 podStartE2EDuration="8.73738343s" podCreationTimestamp="2026-01-26 23:04:08 +0000 UTC" firstStartedPulling="2026-01-26 23:04:10.639629328 +0000 UTC m=+1465.628400850" lastFinishedPulling="2026-01-26 23:04:15.330006314 +0000 UTC m=+1470.318777826" observedRunningTime="2026-01-26 23:04:16.733153725 +0000 UTC m=+1471.721925247" watchObservedRunningTime="2026-01-26 23:04:16.73738343 +0000 UTC m=+1471.726154942" Jan 26 23:04:18 crc kubenswrapper[4793]: I0126 23:04:18.322997 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:04:18 crc kubenswrapper[4793]: I0126 23:04:18.323095 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:04:18 crc kubenswrapper[4793]: I0126 23:04:18.982087 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:18 crc kubenswrapper[4793]: I0126 23:04:18.982658 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:19 crc kubenswrapper[4793]: I0126 23:04:19.033507 4793 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:20 crc kubenswrapper[4793]: I0126 23:04:20.805264 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:20 crc kubenswrapper[4793]: I0126 23:04:20.863508 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7b2qz"] Jan 26 23:04:22 crc kubenswrapper[4793]: I0126 23:04:22.755857 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7b2qz" podUID="9e05e40a-7d6e-4928-ae25-a286d1d55532" containerName="registry-server" containerID="cri-o://64c4b69f3118bd8cf32760e7d3938d902fd5af9d994b161517334e4edf3ee7a2" gracePeriod=2 Jan 26 23:04:23 crc kubenswrapper[4793]: I0126 23:04:23.766294 4793 generic.go:334] "Generic (PLEG): container finished" podID="9e05e40a-7d6e-4928-ae25-a286d1d55532" containerID="64c4b69f3118bd8cf32760e7d3938d902fd5af9d994b161517334e4edf3ee7a2" exitCode=0 Jan 26 23:04:23 crc kubenswrapper[4793]: I0126 23:04:23.769716 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b2qz" event={"ID":"9e05e40a-7d6e-4928-ae25-a286d1d55532","Type":"ContainerDied","Data":"64c4b69f3118bd8cf32760e7d3938d902fd5af9d994b161517334e4edf3ee7a2"} Jan 26 23:04:23 crc kubenswrapper[4793]: I0126 23:04:23.832775 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:23 crc kubenswrapper[4793]: I0126 23:04:23.943249 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e05e40a-7d6e-4928-ae25-a286d1d55532-utilities\") pod \"9e05e40a-7d6e-4928-ae25-a286d1d55532\" (UID: \"9e05e40a-7d6e-4928-ae25-a286d1d55532\") " Jan 26 23:04:23 crc kubenswrapper[4793]: I0126 23:04:23.943384 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e05e40a-7d6e-4928-ae25-a286d1d55532-catalog-content\") pod \"9e05e40a-7d6e-4928-ae25-a286d1d55532\" (UID: \"9e05e40a-7d6e-4928-ae25-a286d1d55532\") " Jan 26 23:04:23 crc kubenswrapper[4793]: I0126 23:04:23.943472 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbc84\" (UniqueName: \"kubernetes.io/projected/9e05e40a-7d6e-4928-ae25-a286d1d55532-kube-api-access-cbc84\") pod \"9e05e40a-7d6e-4928-ae25-a286d1d55532\" (UID: \"9e05e40a-7d6e-4928-ae25-a286d1d55532\") " Jan 26 23:04:23 crc kubenswrapper[4793]: I0126 23:04:23.944245 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e05e40a-7d6e-4928-ae25-a286d1d55532-utilities" (OuterVolumeSpecName: "utilities") pod "9e05e40a-7d6e-4928-ae25-a286d1d55532" (UID: "9e05e40a-7d6e-4928-ae25-a286d1d55532"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:04:23 crc kubenswrapper[4793]: I0126 23:04:23.951542 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e05e40a-7d6e-4928-ae25-a286d1d55532-kube-api-access-cbc84" (OuterVolumeSpecName: "kube-api-access-cbc84") pod "9e05e40a-7d6e-4928-ae25-a286d1d55532" (UID: "9e05e40a-7d6e-4928-ae25-a286d1d55532"). InnerVolumeSpecName "kube-api-access-cbc84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:04:24 crc kubenswrapper[4793]: I0126 23:04:24.045673 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbc84\" (UniqueName: \"kubernetes.io/projected/9e05e40a-7d6e-4928-ae25-a286d1d55532-kube-api-access-cbc84\") on node \"crc\" DevicePath \"\"" Jan 26 23:04:24 crc kubenswrapper[4793]: I0126 23:04:24.045708 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e05e40a-7d6e-4928-ae25-a286d1d55532-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:04:24 crc kubenswrapper[4793]: I0126 23:04:24.776061 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b2qz" event={"ID":"9e05e40a-7d6e-4928-ae25-a286d1d55532","Type":"ContainerDied","Data":"699fc96e963d34aae177ddd3239e0f6efefb165d36950a2ff83786b3615f0891"} Jan 26 23:04:24 crc kubenswrapper[4793]: I0126 23:04:24.776127 4793 scope.go:117] "RemoveContainer" containerID="64c4b69f3118bd8cf32760e7d3938d902fd5af9d994b161517334e4edf3ee7a2" Jan 26 23:04:24 crc kubenswrapper[4793]: I0126 23:04:24.776131 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:24 crc kubenswrapper[4793]: I0126 23:04:24.795450 4793 scope.go:117] "RemoveContainer" containerID="7959fa44827f66b3423528084a9583b377c547bf8bebf66abccbbc5d445f2a11" Jan 26 23:04:24 crc kubenswrapper[4793]: I0126 23:04:24.811511 4793 scope.go:117] "RemoveContainer" containerID="2ebbfbd36a4f00a429af72e8bd75253733316bd4275f445fe8f46a09b8506eac" Jan 26 23:04:24 crc kubenswrapper[4793]: I0126 23:04:24.967070 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e05e40a-7d6e-4928-ae25-a286d1d55532-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e05e40a-7d6e-4928-ae25-a286d1d55532" (UID: "9e05e40a-7d6e-4928-ae25-a286d1d55532"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:04:25 crc kubenswrapper[4793]: I0126 23:04:25.060034 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e05e40a-7d6e-4928-ae25-a286d1d55532-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:04:25 crc kubenswrapper[4793]: I0126 23:04:25.109805 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7b2qz"] Jan 26 23:04:25 crc kubenswrapper[4793]: I0126 23:04:25.121064 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7b2qz"] Jan 26 23:04:25 crc kubenswrapper[4793]: I0126 23:04:25.770858 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e05e40a-7d6e-4928-ae25-a286d1d55532" path="/var/lib/kubelet/pods/9e05e40a-7d6e-4928-ae25-a286d1d55532/volumes" Jan 26 23:04:48 crc kubenswrapper[4793]: I0126 23:04:48.322583 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:04:48 crc kubenswrapper[4793]: I0126 23:04:48.323237 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:05:18 crc kubenswrapper[4793]: I0126 23:05:18.322461 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:05:18 crc kubenswrapper[4793]: I0126 23:05:18.323050 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:05:18 crc kubenswrapper[4793]: I0126 23:05:18.323098 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 23:05:18 crc kubenswrapper[4793]: I0126 23:05:18.323779 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497"} pod="openshift-machine-config-operator/machine-config-daemon-5htjl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 23:05:18 crc kubenswrapper[4793]: I0126 23:05:18.323838 4793 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" containerID="cri-o://b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" gracePeriod=600 Jan 26 23:05:18 crc kubenswrapper[4793]: E0126 23:05:18.590629 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:05:19 crc kubenswrapper[4793]: I0126 23:05:19.186751 4793 generic.go:334] "Generic (PLEG): container finished" podID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" exitCode=0 Jan 26 23:05:19 crc kubenswrapper[4793]: I0126 23:05:19.186795 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerDied","Data":"b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497"} Jan 26 23:05:19 crc kubenswrapper[4793]: I0126 23:05:19.186825 4793 scope.go:117] "RemoveContainer" containerID="c6d612632a90ff0cb1b4809ea801026e4879766ff5fb99a4ea1a8127b7cf2cc3" Jan 26 23:05:19 crc kubenswrapper[4793]: I0126 23:05:19.187585 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:05:19 crc kubenswrapper[4793]: E0126 23:05:19.188082 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:05:32 crc kubenswrapper[4793]: I0126 23:05:32.761509 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:05:32 crc kubenswrapper[4793]: E0126 23:05:32.762270 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.573624 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n62d9"] Jan 26 23:05:36 crc kubenswrapper[4793]: E0126 23:05:36.574245 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e05e40a-7d6e-4928-ae25-a286d1d55532" containerName="extract-utilities" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.574260 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e05e40a-7d6e-4928-ae25-a286d1d55532" containerName="extract-utilities" Jan 26 23:05:36 crc kubenswrapper[4793]: E0126 23:05:36.574285 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e05e40a-7d6e-4928-ae25-a286d1d55532" containerName="extract-content" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.574292 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e05e40a-7d6e-4928-ae25-a286d1d55532" containerName="extract-content" Jan 26 23:05:36 crc kubenswrapper[4793]: E0126 23:05:36.574313 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9e05e40a-7d6e-4928-ae25-a286d1d55532" containerName="registry-server" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.574319 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e05e40a-7d6e-4928-ae25-a286d1d55532" containerName="registry-server" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.574467 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e05e40a-7d6e-4928-ae25-a286d1d55532" containerName="registry-server" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.575466 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.599311 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n62d9"] Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.686870 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-utilities\") pod \"community-operators-n62d9\" (UID: \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\") " pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.688049 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvths\" (UniqueName: \"kubernetes.io/projected/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-kube-api-access-cvths\") pod \"community-operators-n62d9\" (UID: \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\") " pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.688125 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-catalog-content\") pod \"community-operators-n62d9\" (UID: 
\"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\") " pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.789908 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvths\" (UniqueName: \"kubernetes.io/projected/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-kube-api-access-cvths\") pod \"community-operators-n62d9\" (UID: \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\") " pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.789988 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-catalog-content\") pod \"community-operators-n62d9\" (UID: \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\") " pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.790097 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-utilities\") pod \"community-operators-n62d9\" (UID: \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\") " pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.790619 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-catalog-content\") pod \"community-operators-n62d9\" (UID: \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\") " pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.790641 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-utilities\") pod \"community-operators-n62d9\" (UID: \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\") 
" pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.809380 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvths\" (UniqueName: \"kubernetes.io/projected/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-kube-api-access-cvths\") pod \"community-operators-n62d9\" (UID: \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\") " pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.896548 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:37 crc kubenswrapper[4793]: I0126 23:05:37.383602 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n62d9"] Jan 26 23:05:37 crc kubenswrapper[4793]: I0126 23:05:37.963713 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n62d9" event={"ID":"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94","Type":"ContainerStarted","Data":"b2c7be3b64f188ea94ff64c5424b363e760f760a871b1a440341ca934af4e468"} Jan 26 23:05:38 crc kubenswrapper[4793]: I0126 23:05:38.971706 4793 generic.go:334] "Generic (PLEG): container finished" podID="3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" containerID="2d603d64df2df3f30f63e5ba818cc6ae39bc96d1de28d37932422c6576c2c842" exitCode=0 Jan 26 23:05:38 crc kubenswrapper[4793]: I0126 23:05:38.971750 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n62d9" event={"ID":"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94","Type":"ContainerDied","Data":"2d603d64df2df3f30f63e5ba818cc6ae39bc96d1de28d37932422c6576c2c842"} Jan 26 23:05:43 crc kubenswrapper[4793]: I0126 23:05:43.004558 4793 generic.go:334] "Generic (PLEG): container finished" podID="3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" containerID="a56588984210897bed2c1f1ae3aff17b72bea9133ff57646c525bd6f2e9b2ea8" exitCode=0 Jan 26 23:05:43 crc 
kubenswrapper[4793]: I0126 23:05:43.004625 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n62d9" event={"ID":"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94","Type":"ContainerDied","Data":"a56588984210897bed2c1f1ae3aff17b72bea9133ff57646c525bd6f2e9b2ea8"} Jan 26 23:05:44 crc kubenswrapper[4793]: I0126 23:05:44.014507 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n62d9" event={"ID":"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94","Type":"ContainerStarted","Data":"896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b"} Jan 26 23:05:44 crc kubenswrapper[4793]: I0126 23:05:44.040085 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n62d9" podStartSLOduration=3.238608182 podStartE2EDuration="8.040065515s" podCreationTimestamp="2026-01-26 23:05:36 +0000 UTC" firstStartedPulling="2026-01-26 23:05:38.973451499 +0000 UTC m=+1553.962223011" lastFinishedPulling="2026-01-26 23:05:43.774908822 +0000 UTC m=+1558.763680344" observedRunningTime="2026-01-26 23:05:44.0307054 +0000 UTC m=+1559.019476942" watchObservedRunningTime="2026-01-26 23:05:44.040065515 +0000 UTC m=+1559.028837027" Jan 26 23:05:46 crc kubenswrapper[4793]: I0126 23:05:46.575550 4793 scope.go:117] "RemoveContainer" containerID="89a414f6ad4eff334e7c77f7dc32f2bcaf583553d9176e51a8eb3503f48b90e9" Jan 26 23:05:46 crc kubenswrapper[4793]: I0126 23:05:46.897561 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:46 crc kubenswrapper[4793]: I0126 23:05:46.897611 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:46 crc kubenswrapper[4793]: I0126 23:05:46.935789 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:47 crc kubenswrapper[4793]: I0126 23:05:47.761473 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:05:47 crc kubenswrapper[4793]: E0126 23:05:47.762110 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:05:56 crc kubenswrapper[4793]: I0126 23:05:56.942755 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:56 crc kubenswrapper[4793]: I0126 23:05:56.987329 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n62d9"] Jan 26 23:05:57 crc kubenswrapper[4793]: I0126 23:05:57.106903 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n62d9" podUID="3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" containerName="registry-server" containerID="cri-o://896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b" gracePeriod=2 Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.051709 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.116809 4793 generic.go:334] "Generic (PLEG): container finished" podID="3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" containerID="896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b" exitCode=0 Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.116858 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n62d9" event={"ID":"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94","Type":"ContainerDied","Data":"896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b"} Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.116894 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n62d9" event={"ID":"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94","Type":"ContainerDied","Data":"b2c7be3b64f188ea94ff64c5424b363e760f760a871b1a440341ca934af4e468"} Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.116918 4793 scope.go:117] "RemoveContainer" containerID="896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.116916 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.128395 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvths\" (UniqueName: \"kubernetes.io/projected/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-kube-api-access-cvths\") pod \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\" (UID: \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\") " Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.128474 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-catalog-content\") pod \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\" (UID: \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\") " Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.128593 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-utilities\") pod \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\" (UID: \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\") " Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.129350 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-utilities" (OuterVolumeSpecName: "utilities") pod "3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" (UID: "3a8cdb3d-5f1f-4090-8b76-6c8ada204a94"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.135860 4793 scope.go:117] "RemoveContainer" containerID="a56588984210897bed2c1f1ae3aff17b72bea9133ff57646c525bd6f2e9b2ea8" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.153419 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-kube-api-access-cvths" (OuterVolumeSpecName: "kube-api-access-cvths") pod "3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" (UID: "3a8cdb3d-5f1f-4090-8b76-6c8ada204a94"). InnerVolumeSpecName "kube-api-access-cvths". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.184449 4793 scope.go:117] "RemoveContainer" containerID="2d603d64df2df3f30f63e5ba818cc6ae39bc96d1de28d37932422c6576c2c842" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.192748 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" (UID: "3a8cdb3d-5f1f-4090-8b76-6c8ada204a94"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.216282 4793 scope.go:117] "RemoveContainer" containerID="896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b" Jan 26 23:05:58 crc kubenswrapper[4793]: E0126 23:05:58.216817 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b\": container with ID starting with 896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b not found: ID does not exist" containerID="896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.216877 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b"} err="failed to get container status \"896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b\": rpc error: code = NotFound desc = could not find container \"896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b\": container with ID starting with 896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b not found: ID does not exist" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.216910 4793 scope.go:117] "RemoveContainer" containerID="a56588984210897bed2c1f1ae3aff17b72bea9133ff57646c525bd6f2e9b2ea8" Jan 26 23:05:58 crc kubenswrapper[4793]: E0126 23:05:58.217322 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a56588984210897bed2c1f1ae3aff17b72bea9133ff57646c525bd6f2e9b2ea8\": container with ID starting with a56588984210897bed2c1f1ae3aff17b72bea9133ff57646c525bd6f2e9b2ea8 not found: ID does not exist" containerID="a56588984210897bed2c1f1ae3aff17b72bea9133ff57646c525bd6f2e9b2ea8" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.217365 
4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a56588984210897bed2c1f1ae3aff17b72bea9133ff57646c525bd6f2e9b2ea8"} err="failed to get container status \"a56588984210897bed2c1f1ae3aff17b72bea9133ff57646c525bd6f2e9b2ea8\": rpc error: code = NotFound desc = could not find container \"a56588984210897bed2c1f1ae3aff17b72bea9133ff57646c525bd6f2e9b2ea8\": container with ID starting with a56588984210897bed2c1f1ae3aff17b72bea9133ff57646c525bd6f2e9b2ea8 not found: ID does not exist" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.217394 4793 scope.go:117] "RemoveContainer" containerID="2d603d64df2df3f30f63e5ba818cc6ae39bc96d1de28d37932422c6576c2c842" Jan 26 23:05:58 crc kubenswrapper[4793]: E0126 23:05:58.217688 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d603d64df2df3f30f63e5ba818cc6ae39bc96d1de28d37932422c6576c2c842\": container with ID starting with 2d603d64df2df3f30f63e5ba818cc6ae39bc96d1de28d37932422c6576c2c842 not found: ID does not exist" containerID="2d603d64df2df3f30f63e5ba818cc6ae39bc96d1de28d37932422c6576c2c842" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.217727 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d603d64df2df3f30f63e5ba818cc6ae39bc96d1de28d37932422c6576c2c842"} err="failed to get container status \"2d603d64df2df3f30f63e5ba818cc6ae39bc96d1de28d37932422c6576c2c842\": rpc error: code = NotFound desc = could not find container \"2d603d64df2df3f30f63e5ba818cc6ae39bc96d1de28d37932422c6576c2c842\": container with ID starting with 2d603d64df2df3f30f63e5ba818cc6ae39bc96d1de28d37932422c6576c2c842 not found: ID does not exist" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.230171 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-utilities\") on node 
\"crc\" DevicePath \"\"" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.230230 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvths\" (UniqueName: \"kubernetes.io/projected/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-kube-api-access-cvths\") on node \"crc\" DevicePath \"\"" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.230243 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.453497 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n62d9"] Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.461458 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n62d9"] Jan 26 23:05:59 crc kubenswrapper[4793]: I0126 23:05:59.768957 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" path="/var/lib/kubelet/pods/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94/volumes" Jan 26 23:06:02 crc kubenswrapper[4793]: I0126 23:06:02.761406 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:06:02 crc kubenswrapper[4793]: E0126 23:06:02.761940 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:06:06 crc kubenswrapper[4793]: I0126 23:06:06.782755 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/nova-kuttl-default_keystone-517e-account-create-update-z5vlt_a62f2482-4c97-42ed-92d7-1b0dc5319971/mariadb-account-create-update/0.log" Jan 26 23:06:07 crc kubenswrapper[4793]: I0126 23:06:07.317577 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-564b65fb54-5sx8h_2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348/keystone-api/0.log" Jan 26 23:06:07 crc kubenswrapper[4793]: I0126 23:06:07.890990 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-bootstrap-5hcwl_66c97183-25d5-4ca6-bede-619df68d1471/keystone-bootstrap/0.log" Jan 26 23:06:08 crc kubenswrapper[4793]: I0126 23:06:08.452377 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-cron-29491141-mgww5_86415a48-3c2d-430b-a38c-f43e1c518984/keystone-cron/0.log" Jan 26 23:06:08 crc kubenswrapper[4793]: I0126 23:06:08.971811 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-db-create-566zk_74d39f02-fd88-4b8f-8f71-0f4f1386ad9a/mariadb-database-create/0.log" Jan 26 23:06:09 crc kubenswrapper[4793]: I0126 23:06:09.488620 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-db-sync-zhxm7_288bfc86-c5a0-401c-bbda-ea48bbbd855c/keystone-db-sync/0.log" Jan 26 23:06:10 crc kubenswrapper[4793]: I0126 23:06:10.259313 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_d933c86b-bb42-4d94-9eb2-65888a5e95ab/memcached/0.log" Jan 26 23:06:10 crc kubenswrapper[4793]: I0126 23:06:10.833787 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_1fc3daf2-84e7-4831-9eae-155632f1b0cd/galera/0.log" Jan 26 23:06:11 crc kubenswrapper[4793]: I0126 23:06:11.407699 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_9cf954a6-b71a-49f9-93c5-618d4e944159/galera/0.log" Jan 26 23:06:11 crc 
kubenswrapper[4793]: I0126 23:06:11.962511 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_dfe7ef28-7eb1-48c8-b2bb-8990933e8971/openstackclient/0.log" Jan 26 23:06:12 crc kubenswrapper[4793]: I0126 23:06:12.488023 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-655b5d9d44-xt9hv_a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e/placement-log/0.log" Jan 26 23:06:12 crc kubenswrapper[4793]: I0126 23:06:12.931776 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-ca47-account-create-update-8c675_05214d2f-078c-43c5-bf4d-c8d80580ce8f/mariadb-account-create-update/0.log" Jan 26 23:06:13 crc kubenswrapper[4793]: I0126 23:06:13.323016 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-db-create-bz7cb_466a301d-4d91-4f49-831a-7d7f07ecd1bd/mariadb-database-create/0.log" Jan 26 23:06:13 crc kubenswrapper[4793]: I0126 23:06:13.752429 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-db-sync-rn9xt_26147a27-d0db-498e-bf1d-fb496d6a7b48/placement-db-sync/0.log" Jan 26 23:06:14 crc kubenswrapper[4793]: I0126 23:06:14.232988 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a/rabbitmq/0.log" Jan 26 23:06:14 crc kubenswrapper[4793]: I0126 23:06:14.774257 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_a8704cd7-a5e9-45ca-9886-cef2f797c7f1/rabbitmq/0.log" Jan 26 23:06:15 crc kubenswrapper[4793]: I0126 23:06:15.271369 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-notifications-server-0_70c43064-b9c2-4f01-9bc2-5431c5dca494/rabbitmq/0.log" Jan 26 23:06:15 crc kubenswrapper[4793]: I0126 23:06:15.741937 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_733142d4-49c6-4e25-a160-78aa1118d296/rabbitmq/0.log" Jan 26 23:06:16 crc kubenswrapper[4793]: I0126 23:06:16.265951 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_root-account-create-update-hrh5x_bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a/mariadb-account-create-update/0.log" Jan 26 23:06:16 crc kubenswrapper[4793]: I0126 23:06:16.762040 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:06:16 crc kubenswrapper[4793]: E0126 23:06:16.762993 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:06:28 crc kubenswrapper[4793]: I0126 23:06:28.761670 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:06:28 crc kubenswrapper[4793]: E0126 23:06:28.762438 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:06:39 crc kubenswrapper[4793]: I0126 23:06:39.761145 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:06:39 crc kubenswrapper[4793]: E0126 23:06:39.762816 4793 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:06:46 crc kubenswrapper[4793]: I0126 23:06:46.664415 4793 scope.go:117] "RemoveContainer" containerID="fdea9b9e0baa0a798c51488874e92138a3f1af0ea746b2e505a1d80e6411576f" Jan 26 23:06:48 crc kubenswrapper[4793]: I0126 23:06:48.992287 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj_e023476b-0850-4e37-97da-0cd1a5d9425f/extract/0.log" Jan 26 23:06:49 crc kubenswrapper[4793]: I0126 23:06:49.433727 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf_fcc64fc2-d867-4a08-8df7-0464de3c133e/extract/0.log" Jan 26 23:06:49 crc kubenswrapper[4793]: I0126 23:06:49.862231 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6987f66698-542qx_71c7d600-2dac-4e19-a33f-0311a8342774/manager/0.log" Jan 26 23:06:50 crc kubenswrapper[4793]: I0126 23:06:50.262664 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-655bf9cfbb-dgk2z_58fa089e-6ccd-4521-88ff-e65e6928b738/manager/0.log" Jan 26 23:06:50 crc kubenswrapper[4793]: I0126 23:06:50.705159 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-77554cdc5c-vnt9q_4190249d-f34c-4b1a-a4da-038f7a806fc6/manager/0.log" Jan 26 23:06:51 crc kubenswrapper[4793]: I0126 23:06:51.119821 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67dd55ff59-cpwtx_f15117fc-93f9-498b-b831-e87094aa991e/manager/0.log" Jan 26 23:06:51 crc kubenswrapper[4793]: I0126 23:06:51.523693 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-954b94f75-fnjqt_18b41715-6c23-4821-b0da-1c17b5010375/manager/0.log" Jan 26 23:06:51 crc kubenswrapper[4793]: I0126 23:06:51.761530 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:06:51 crc kubenswrapper[4793]: E0126 23:06:51.761847 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:06:51 crc kubenswrapper[4793]: I0126 23:06:51.909720 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-vkwjg_42a387a4-faad-41fe-bfa9-0f600a06e6e0/manager/0.log" Jan 26 23:06:52 crc kubenswrapper[4793]: I0126 23:06:52.405051 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d75bc88d5-8zm8h_eb8ed64e-38aa-4e1f-be80-29be415125fd/manager/0.log" Jan 26 23:06:52 crc kubenswrapper[4793]: I0126 23:06:52.821805 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-768b776ffb-7twhn_bcf4192a-ff9b-4b02-83d0-a17ecd3ba795/manager/0.log" Jan 26 23:06:53 crc kubenswrapper[4793]: I0126 23:06:53.276856 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55f684fd56-g2r87_22d5cae5-26fb-4a47-97ac-78ba7120d29c/manager/0.log" Jan 26 23:06:53 crc kubenswrapper[4793]: I0126 23:06:53.691757 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-849fcfbb6b-flkdn_c6fcff5b-7bdb-4607-bd4d-389c863ddba6/manager/0.log" Jan 26 23:06:54 crc kubenswrapper[4793]: I0126 23:06:54.144959 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf_da1ca3d4-2bae-4133-88c8-b67f74ebc6ab/manager/0.log" Jan 26 23:06:54 crc kubenswrapper[4793]: I0126 23:06:54.558886 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7ffd8d76d4-vxk8h_b934b85e-14a0-4ad2-bdc0-82280eb346a9/manager/0.log" Jan 26 23:06:54 crc kubenswrapper[4793]: I0126 23:06:54.993051 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5b4fc7b894-6xkrk_209b817f-0bec-4d6b-814c-ae2a07913a56/manager/0.log" Jan 26 23:06:55 crc kubenswrapper[4793]: I0126 23:06:55.473923 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-864fr_c0a1696e-8859-4708-84f1-57b65d7cc16a/registry-server/0.log" Jan 26 23:06:55 crc kubenswrapper[4793]: I0126 23:06:55.900065 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-756f86fc74-mvr7p_4f086064-ee5a-47cf-bf97-fc2a423d8c33/manager/0.log" Jan 26 23:06:56 crc kubenswrapper[4793]: I0126 23:06:56.300327 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4_86ca275a-4c50-479e-9ed5-4e03dda309cf/manager/0.log" Jan 26 23:06:56 crc kubenswrapper[4793]: I0126 23:06:56.914331 4793 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5c5bb8bdbb-6lmgq_b1cd2fec-ba5b-4984-90c5-565df4ef5cd1/manager/0.log" Jan 26 23:06:57 crc kubenswrapper[4793]: I0126 23:06:57.324588 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-j9sn2_b2f6b1be-cc0d-47e0-8375-704e63bf5a2b/registry-server/0.log" Jan 26 23:06:57 crc kubenswrapper[4793]: I0126 23:06:57.730029 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-2jn8d_6e0878fd-53dc-4048-9267-2520c5919067/manager/0.log" Jan 26 23:06:58 crc kubenswrapper[4793]: I0126 23:06:58.151140 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-bp8k8_7968980a-764c-4cd2-b77f-ed33fffbd294/manager/0.log" Jan 26 23:06:58 crc kubenswrapper[4793]: I0126 23:06:58.581564 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vzmsv_d4c8a5bf-4e12-46b4-9d2a-1a098416e90a/operator/0.log" Jan 26 23:06:58 crc kubenswrapper[4793]: I0126 23:06:58.979952 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-rrmcg_06401d29-ca17-43df-865a-26523cf84f67/manager/0.log" Jan 26 23:06:59 crc kubenswrapper[4793]: I0126 23:06:59.391784 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-799bc87c89-j292m_65c6d17b-b112-421c-a686-ce1601f91181/manager/0.log" Jan 26 23:06:59 crc kubenswrapper[4793]: I0126 23:06:59.781414 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-sgcrb_6e38b64f-cd6a-4cac-a125-be388bb0dc78/manager/0.log" Jan 26 23:07:00 crc kubenswrapper[4793]: I0126 23:07:00.186540 4793 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-cb484894b-qxf7r_d0e46999-98e6-40a6-99a2-f4c01dad0f81/manager/0.log" Jan 26 23:07:04 crc kubenswrapper[4793]: I0126 23:07:04.761501 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:07:04 crc kubenswrapper[4793]: E0126 23:07:04.762277 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:07:04 crc kubenswrapper[4793]: I0126 23:07:04.919438 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-517e-account-create-update-z5vlt_a62f2482-4c97-42ed-92d7-1b0dc5319971/mariadb-account-create-update/0.log" Jan 26 23:07:05 crc kubenswrapper[4793]: I0126 23:07:05.402345 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-564b65fb54-5sx8h_2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348/keystone-api/0.log" Jan 26 23:07:05 crc kubenswrapper[4793]: I0126 23:07:05.952075 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-bootstrap-5hcwl_66c97183-25d5-4ca6-bede-619df68d1471/keystone-bootstrap/0.log" Jan 26 23:07:06 crc kubenswrapper[4793]: I0126 23:07:06.522141 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-cron-29491141-mgww5_86415a48-3c2d-430b-a38c-f43e1c518984/keystone-cron/0.log" Jan 26 23:07:07 crc kubenswrapper[4793]: I0126 23:07:07.068195 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/nova-kuttl-default_keystone-db-create-566zk_74d39f02-fd88-4b8f-8f71-0f4f1386ad9a/mariadb-database-create/0.log" Jan 26 23:07:07 crc kubenswrapper[4793]: I0126 23:07:07.567774 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-db-sync-zhxm7_288bfc86-c5a0-401c-bbda-ea48bbbd855c/keystone-db-sync/0.log" Jan 26 23:07:08 crc kubenswrapper[4793]: I0126 23:07:08.271768 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_d933c86b-bb42-4d94-9eb2-65888a5e95ab/memcached/0.log" Jan 26 23:07:08 crc kubenswrapper[4793]: I0126 23:07:08.823700 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_1fc3daf2-84e7-4831-9eae-155632f1b0cd/galera/0.log" Jan 26 23:07:09 crc kubenswrapper[4793]: I0126 23:07:09.347349 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_9cf954a6-b71a-49f9-93c5-618d4e944159/galera/0.log" Jan 26 23:07:09 crc kubenswrapper[4793]: I0126 23:07:09.837725 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_dfe7ef28-7eb1-48c8-b2bb-8990933e8971/openstackclient/0.log" Jan 26 23:07:10 crc kubenswrapper[4793]: I0126 23:07:10.349938 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-655b5d9d44-xt9hv_a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e/placement-log/0.log" Jan 26 23:07:10 crc kubenswrapper[4793]: I0126 23:07:10.819246 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-ca47-account-create-update-8c675_05214d2f-078c-43c5-bf4d-c8d80580ce8f/mariadb-account-create-update/0.log" Jan 26 23:07:11 crc kubenswrapper[4793]: I0126 23:07:11.206444 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-db-create-bz7cb_466a301d-4d91-4f49-831a-7d7f07ecd1bd/mariadb-database-create/0.log" Jan 26 23:07:11 crc 
kubenswrapper[4793]: I0126 23:07:11.621521 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-db-sync-rn9xt_26147a27-d0db-498e-bf1d-fb496d6a7b48/placement-db-sync/0.log" Jan 26 23:07:12 crc kubenswrapper[4793]: I0126 23:07:12.085580 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a/rabbitmq/0.log" Jan 26 23:07:12 crc kubenswrapper[4793]: I0126 23:07:12.502686 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_a8704cd7-a5e9-45ca-9886-cef2f797c7f1/rabbitmq/0.log" Jan 26 23:07:12 crc kubenswrapper[4793]: I0126 23:07:12.943681 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-notifications-server-0_70c43064-b9c2-4f01-9bc2-5431c5dca494/rabbitmq/0.log" Jan 26 23:07:13 crc kubenswrapper[4793]: I0126 23:07:13.360326 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_733142d4-49c6-4e25-a160-78aa1118d296/rabbitmq/0.log" Jan 26 23:07:13 crc kubenswrapper[4793]: I0126 23:07:13.793554 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_root-account-create-update-hrh5x_bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a/mariadb-account-create-update/0.log" Jan 26 23:07:16 crc kubenswrapper[4793]: I0126 23:07:16.761501 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:07:16 crc kubenswrapper[4793]: E0126 23:07:16.762159 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" 
podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.340560 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zvsz6"] Jan 26 23:07:17 crc kubenswrapper[4793]: E0126 23:07:17.340854 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" containerName="extract-utilities" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.340869 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" containerName="extract-utilities" Jan 26 23:07:17 crc kubenswrapper[4793]: E0126 23:07:17.340885 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" containerName="registry-server" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.340891 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" containerName="registry-server" Jan 26 23:07:17 crc kubenswrapper[4793]: E0126 23:07:17.340917 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" containerName="extract-content" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.340923 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" containerName="extract-content" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.341059 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" containerName="registry-server" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.342562 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.360481 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvsz6"] Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.421155 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-utilities\") pod \"redhat-marketplace-zvsz6\" (UID: \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\") " pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.421496 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-catalog-content\") pod \"redhat-marketplace-zvsz6\" (UID: \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\") " pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.421636 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkzw7\" (UniqueName: \"kubernetes.io/projected/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-kube-api-access-xkzw7\") pod \"redhat-marketplace-zvsz6\" (UID: \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\") " pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.523130 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-catalog-content\") pod \"redhat-marketplace-zvsz6\" (UID: \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\") " pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.523545 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-catalog-content\") pod \"redhat-marketplace-zvsz6\" (UID: \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\") " pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.523666 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkzw7\" (UniqueName: \"kubernetes.io/projected/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-kube-api-access-xkzw7\") pod \"redhat-marketplace-zvsz6\" (UID: \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\") " pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.524072 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-utilities\") pod \"redhat-marketplace-zvsz6\" (UID: \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\") " pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.524339 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-utilities\") pod \"redhat-marketplace-zvsz6\" (UID: \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\") " pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.550662 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkzw7\" (UniqueName: \"kubernetes.io/projected/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-kube-api-access-xkzw7\") pod \"redhat-marketplace-zvsz6\" (UID: \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\") " pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.663886 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:18 crc kubenswrapper[4793]: I0126 23:07:18.063683 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvsz6"] Jan 26 23:07:18 crc kubenswrapper[4793]: I0126 23:07:18.714474 4793 generic.go:334] "Generic (PLEG): container finished" podID="3db0b8a0-1455-4405-8eba-9dfb8223b6e1" containerID="06b32dc9ab9cf6925b8f5479a414d966a4475370e934b1b164f05e6616fed094" exitCode=0 Jan 26 23:07:18 crc kubenswrapper[4793]: I0126 23:07:18.714519 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvsz6" event={"ID":"3db0b8a0-1455-4405-8eba-9dfb8223b6e1","Type":"ContainerDied","Data":"06b32dc9ab9cf6925b8f5479a414d966a4475370e934b1b164f05e6616fed094"} Jan 26 23:07:18 crc kubenswrapper[4793]: I0126 23:07:18.714544 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvsz6" event={"ID":"3db0b8a0-1455-4405-8eba-9dfb8223b6e1","Type":"ContainerStarted","Data":"461ed08c8856de812b1bf4c2e17198e800788dacf0634ecdba5b2a28f3da782c"} Jan 26 23:07:20 crc kubenswrapper[4793]: I0126 23:07:20.732440 4793 generic.go:334] "Generic (PLEG): container finished" podID="3db0b8a0-1455-4405-8eba-9dfb8223b6e1" containerID="7cb617c5f495052b182a5b151b81c2ec9fc4dcdb873bc775233217eec17a0209" exitCode=0 Jan 26 23:07:20 crc kubenswrapper[4793]: I0126 23:07:20.732591 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvsz6" event={"ID":"3db0b8a0-1455-4405-8eba-9dfb8223b6e1","Type":"ContainerDied","Data":"7cb617c5f495052b182a5b151b81c2ec9fc4dcdb873bc775233217eec17a0209"} Jan 26 23:07:21 crc kubenswrapper[4793]: I0126 23:07:21.740296 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvsz6" 
event={"ID":"3db0b8a0-1455-4405-8eba-9dfb8223b6e1","Type":"ContainerStarted","Data":"3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e"} Jan 26 23:07:21 crc kubenswrapper[4793]: I0126 23:07:21.769147 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zvsz6" podStartSLOduration=2.336948925 podStartE2EDuration="4.769127112s" podCreationTimestamp="2026-01-26 23:07:17 +0000 UTC" firstStartedPulling="2026-01-26 23:07:18.717080148 +0000 UTC m=+1653.705851660" lastFinishedPulling="2026-01-26 23:07:21.149258335 +0000 UTC m=+1656.138029847" observedRunningTime="2026-01-26 23:07:21.761364061 +0000 UTC m=+1656.750135583" watchObservedRunningTime="2026-01-26 23:07:21.769127112 +0000 UTC m=+1656.757898624" Jan 26 23:07:27 crc kubenswrapper[4793]: I0126 23:07:27.665235 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:27 crc kubenswrapper[4793]: I0126 23:07:27.665623 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:27 crc kubenswrapper[4793]: I0126 23:07:27.721419 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:27 crc kubenswrapper[4793]: I0126 23:07:27.837021 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:27 crc kubenswrapper[4793]: I0126 23:07:27.953543 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvsz6"] Jan 26 23:07:29 crc kubenswrapper[4793]: I0126 23:07:29.802779 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zvsz6" podUID="3db0b8a0-1455-4405-8eba-9dfb8223b6e1" containerName="registry-server" 
containerID="cri-o://3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e" gracePeriod=2 Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.789250 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.832783 4793 generic.go:334] "Generic (PLEG): container finished" podID="3db0b8a0-1455-4405-8eba-9dfb8223b6e1" containerID="3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e" exitCode=0 Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.832826 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvsz6" event={"ID":"3db0b8a0-1455-4405-8eba-9dfb8223b6e1","Type":"ContainerDied","Data":"3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e"} Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.832853 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvsz6" event={"ID":"3db0b8a0-1455-4405-8eba-9dfb8223b6e1","Type":"ContainerDied","Data":"461ed08c8856de812b1bf4c2e17198e800788dacf0634ecdba5b2a28f3da782c"} Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.832873 4793 scope.go:117] "RemoveContainer" containerID="3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.832928 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.850425 4793 scope.go:117] "RemoveContainer" containerID="7cb617c5f495052b182a5b151b81c2ec9fc4dcdb873bc775233217eec17a0209" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.874102 4793 scope.go:117] "RemoveContainer" containerID="06b32dc9ab9cf6925b8f5479a414d966a4475370e934b1b164f05e6616fed094" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.897986 4793 scope.go:117] "RemoveContainer" containerID="3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e" Jan 26 23:07:30 crc kubenswrapper[4793]: E0126 23:07:30.898420 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e\": container with ID starting with 3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e not found: ID does not exist" containerID="3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.898461 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e"} err="failed to get container status \"3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e\": rpc error: code = NotFound desc = could not find container \"3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e\": container with ID starting with 3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e not found: ID does not exist" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.898486 4793 scope.go:117] "RemoveContainer" containerID="7cb617c5f495052b182a5b151b81c2ec9fc4dcdb873bc775233217eec17a0209" Jan 26 23:07:30 crc kubenswrapper[4793]: E0126 23:07:30.898720 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"7cb617c5f495052b182a5b151b81c2ec9fc4dcdb873bc775233217eec17a0209\": container with ID starting with 7cb617c5f495052b182a5b151b81c2ec9fc4dcdb873bc775233217eec17a0209 not found: ID does not exist" containerID="7cb617c5f495052b182a5b151b81c2ec9fc4dcdb873bc775233217eec17a0209" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.898747 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cb617c5f495052b182a5b151b81c2ec9fc4dcdb873bc775233217eec17a0209"} err="failed to get container status \"7cb617c5f495052b182a5b151b81c2ec9fc4dcdb873bc775233217eec17a0209\": rpc error: code = NotFound desc = could not find container \"7cb617c5f495052b182a5b151b81c2ec9fc4dcdb873bc775233217eec17a0209\": container with ID starting with 7cb617c5f495052b182a5b151b81c2ec9fc4dcdb873bc775233217eec17a0209 not found: ID does not exist" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.898766 4793 scope.go:117] "RemoveContainer" containerID="06b32dc9ab9cf6925b8f5479a414d966a4475370e934b1b164f05e6616fed094" Jan 26 23:07:30 crc kubenswrapper[4793]: E0126 23:07:30.898999 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06b32dc9ab9cf6925b8f5479a414d966a4475370e934b1b164f05e6616fed094\": container with ID starting with 06b32dc9ab9cf6925b8f5479a414d966a4475370e934b1b164f05e6616fed094 not found: ID does not exist" containerID="06b32dc9ab9cf6925b8f5479a414d966a4475370e934b1b164f05e6616fed094" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.899020 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06b32dc9ab9cf6925b8f5479a414d966a4475370e934b1b164f05e6616fed094"} err="failed to get container status \"06b32dc9ab9cf6925b8f5479a414d966a4475370e934b1b164f05e6616fed094\": rpc error: code = NotFound desc = could not find container 
\"06b32dc9ab9cf6925b8f5479a414d966a4475370e934b1b164f05e6616fed094\": container with ID starting with 06b32dc9ab9cf6925b8f5479a414d966a4475370e934b1b164f05e6616fed094 not found: ID does not exist" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.933928 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkzw7\" (UniqueName: \"kubernetes.io/projected/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-kube-api-access-xkzw7\") pod \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\" (UID: \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\") " Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.933984 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-catalog-content\") pod \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\" (UID: \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\") " Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.934212 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-utilities\") pod \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\" (UID: \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\") " Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.935055 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-utilities" (OuterVolumeSpecName: "utilities") pod "3db0b8a0-1455-4405-8eba-9dfb8223b6e1" (UID: "3db0b8a0-1455-4405-8eba-9dfb8223b6e1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.939678 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-kube-api-access-xkzw7" (OuterVolumeSpecName: "kube-api-access-xkzw7") pod "3db0b8a0-1455-4405-8eba-9dfb8223b6e1" (UID: "3db0b8a0-1455-4405-8eba-9dfb8223b6e1"). InnerVolumeSpecName "kube-api-access-xkzw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.959022 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3db0b8a0-1455-4405-8eba-9dfb8223b6e1" (UID: "3db0b8a0-1455-4405-8eba-9dfb8223b6e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:07:31 crc kubenswrapper[4793]: I0126 23:07:31.037141 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:07:31 crc kubenswrapper[4793]: I0126 23:07:31.037261 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:07:31 crc kubenswrapper[4793]: I0126 23:07:31.037273 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkzw7\" (UniqueName: \"kubernetes.io/projected/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-kube-api-access-xkzw7\") on node \"crc\" DevicePath \"\"" Jan 26 23:07:31 crc kubenswrapper[4793]: I0126 23:07:31.161626 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvsz6"] Jan 26 23:07:31 crc kubenswrapper[4793]: I0126 
23:07:31.168119 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvsz6"] Jan 26 23:07:31 crc kubenswrapper[4793]: I0126 23:07:31.760953 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:07:31 crc kubenswrapper[4793]: E0126 23:07:31.761573 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:07:31 crc kubenswrapper[4793]: I0126 23:07:31.770089 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3db0b8a0-1455-4405-8eba-9dfb8223b6e1" path="/var/lib/kubelet/pods/3db0b8a0-1455-4405-8eba-9dfb8223b6e1/volumes" Jan 26 23:07:43 crc kubenswrapper[4793]: I0126 23:07:43.762063 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:07:43 crc kubenswrapper[4793]: E0126 23:07:43.763555 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:07:44 crc kubenswrapper[4793]: I0126 23:07:44.315645 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj_e023476b-0850-4e37-97da-0cd1a5d9425f/extract/0.log" Jan 26 23:07:44 
crc kubenswrapper[4793]: I0126 23:07:44.708896 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf_fcc64fc2-d867-4a08-8df7-0464de3c133e/extract/0.log" Jan 26 23:07:45 crc kubenswrapper[4793]: I0126 23:07:45.082846 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6987f66698-542qx_71c7d600-2dac-4e19-a33f-0311a8342774/manager/0.log" Jan 26 23:07:45 crc kubenswrapper[4793]: I0126 23:07:45.461878 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-655bf9cfbb-dgk2z_58fa089e-6ccd-4521-88ff-e65e6928b738/manager/0.log" Jan 26 23:07:45 crc kubenswrapper[4793]: I0126 23:07:45.846206 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-77554cdc5c-vnt9q_4190249d-f34c-4b1a-a4da-038f7a806fc6/manager/0.log" Jan 26 23:07:46 crc kubenswrapper[4793]: I0126 23:07:46.274622 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67dd55ff59-cpwtx_f15117fc-93f9-498b-b831-e87094aa991e/manager/0.log" Jan 26 23:07:46 crc kubenswrapper[4793]: I0126 23:07:46.708076 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-954b94f75-fnjqt_18b41715-6c23-4821-b0da-1c17b5010375/manager/0.log" Jan 26 23:07:47 crc kubenswrapper[4793]: I0126 23:07:47.105507 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-vkwjg_42a387a4-faad-41fe-bfa9-0f600a06e6e0/manager/0.log" Jan 26 23:07:47 crc kubenswrapper[4793]: I0126 23:07:47.570975 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d75bc88d5-8zm8h_eb8ed64e-38aa-4e1f-be80-29be415125fd/manager/0.log" Jan 26 23:07:47 crc kubenswrapper[4793]: I0126 23:07:47.978548 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-768b776ffb-7twhn_bcf4192a-ff9b-4b02-83d0-a17ecd3ba795/manager/0.log" Jan 26 23:07:48 crc kubenswrapper[4793]: I0126 23:07:48.433859 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55f684fd56-g2r87_22d5cae5-26fb-4a47-97ac-78ba7120d29c/manager/0.log" Jan 26 23:07:48 crc kubenswrapper[4793]: I0126 23:07:48.849959 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-849fcfbb6b-flkdn_c6fcff5b-7bdb-4607-bd4d-389c863ddba6/manager/0.log" Jan 26 23:07:49 crc kubenswrapper[4793]: I0126 23:07:49.246952 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf_da1ca3d4-2bae-4133-88c8-b67f74ebc6ab/manager/0.log" Jan 26 23:07:49 crc kubenswrapper[4793]: I0126 23:07:49.681366 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7ffd8d76d4-vxk8h_b934b85e-14a0-4ad2-bdc0-82280eb346a9/manager/0.log" Jan 26 23:07:50 crc kubenswrapper[4793]: I0126 23:07:50.107872 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5b4fc7b894-6xkrk_209b817f-0bec-4d6b-814c-ae2a07913a56/manager/0.log" Jan 26 23:07:50 crc kubenswrapper[4793]: I0126 23:07:50.584709 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-864fr_c0a1696e-8859-4708-84f1-57b65d7cc16a/registry-server/0.log" Jan 26 23:07:51 crc kubenswrapper[4793]: I0126 23:07:51.001828 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-756f86fc74-mvr7p_4f086064-ee5a-47cf-bf97-fc2a423d8c33/manager/0.log" Jan 26 23:07:51 crc kubenswrapper[4793]: I0126 23:07:51.435399 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4_86ca275a-4c50-479e-9ed5-4e03dda309cf/manager/0.log" Jan 26 23:07:52 crc kubenswrapper[4793]: I0126 23:07:52.036140 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5c5bb8bdbb-6lmgq_b1cd2fec-ba5b-4984-90c5-565df4ef5cd1/manager/0.log" Jan 26 23:07:52 crc kubenswrapper[4793]: I0126 23:07:52.450060 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-j9sn2_b2f6b1be-cc0d-47e0-8375-704e63bf5a2b/registry-server/0.log" Jan 26 23:07:52 crc kubenswrapper[4793]: I0126 23:07:52.878377 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-2jn8d_6e0878fd-53dc-4048-9267-2520c5919067/manager/0.log" Jan 26 23:07:53 crc kubenswrapper[4793]: I0126 23:07:53.305701 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-bp8k8_7968980a-764c-4cd2-b77f-ed33fffbd294/manager/0.log" Jan 26 23:07:53 crc kubenswrapper[4793]: I0126 23:07:53.745911 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vzmsv_d4c8a5bf-4e12-46b4-9d2a-1a098416e90a/operator/0.log" Jan 26 23:07:54 crc kubenswrapper[4793]: I0126 23:07:54.148441 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-rrmcg_06401d29-ca17-43df-865a-26523cf84f67/manager/0.log" Jan 26 23:07:54 crc kubenswrapper[4793]: I0126 23:07:54.552643 4793 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-799bc87c89-j292m_65c6d17b-b112-421c-a686-ce1601f91181/manager/0.log" Jan 26 23:07:54 crc kubenswrapper[4793]: I0126 23:07:54.980424 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-sgcrb_6e38b64f-cd6a-4cac-a125-be388bb0dc78/manager/0.log" Jan 26 23:07:55 crc kubenswrapper[4793]: I0126 23:07:55.378722 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-cb484894b-qxf7r_d0e46999-98e6-40a6-99a2-f4c01dad0f81/manager/0.log" Jan 26 23:07:57 crc kubenswrapper[4793]: I0126 23:07:57.761421 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:07:57 crc kubenswrapper[4793]: E0126 23:07:57.762401 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:08:10 crc kubenswrapper[4793]: I0126 23:08:10.760712 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:08:10 crc kubenswrapper[4793]: E0126 23:08:10.761598 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" 
podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.305558 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nnzkd/must-gather-j7rpq"] Jan 26 23:08:17 crc kubenswrapper[4793]: E0126 23:08:17.306684 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db0b8a0-1455-4405-8eba-9dfb8223b6e1" containerName="registry-server" Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.306700 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db0b8a0-1455-4405-8eba-9dfb8223b6e1" containerName="registry-server" Jan 26 23:08:17 crc kubenswrapper[4793]: E0126 23:08:17.306727 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db0b8a0-1455-4405-8eba-9dfb8223b6e1" containerName="extract-content" Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.306733 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db0b8a0-1455-4405-8eba-9dfb8223b6e1" containerName="extract-content" Jan 26 23:08:17 crc kubenswrapper[4793]: E0126 23:08:17.306748 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db0b8a0-1455-4405-8eba-9dfb8223b6e1" containerName="extract-utilities" Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.306755 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db0b8a0-1455-4405-8eba-9dfb8223b6e1" containerName="extract-utilities" Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.306900 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db0b8a0-1455-4405-8eba-9dfb8223b6e1" containerName="registry-server" Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.307723 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nnzkd/must-gather-j7rpq" Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.310880 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-nnzkd"/"default-dockercfg-j2fv4" Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.311005 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-nnzkd"/"kube-root-ca.crt" Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.313060 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-nnzkd"/"openshift-service-ca.crt" Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.330423 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nnzkd/must-gather-j7rpq"] Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.359540 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hl6k\" (UniqueName: \"kubernetes.io/projected/1022863d-ebfc-49a0-85f1-a89a8f6fb03c-kube-api-access-8hl6k\") pod \"must-gather-j7rpq\" (UID: \"1022863d-ebfc-49a0-85f1-a89a8f6fb03c\") " pod="openshift-must-gather-nnzkd/must-gather-j7rpq" Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.359603 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1022863d-ebfc-49a0-85f1-a89a8f6fb03c-must-gather-output\") pod \"must-gather-j7rpq\" (UID: \"1022863d-ebfc-49a0-85f1-a89a8f6fb03c\") " pod="openshift-must-gather-nnzkd/must-gather-j7rpq" Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.460972 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1022863d-ebfc-49a0-85f1-a89a8f6fb03c-must-gather-output\") pod \"must-gather-j7rpq\" (UID: \"1022863d-ebfc-49a0-85f1-a89a8f6fb03c\") " 
pod="openshift-must-gather-nnzkd/must-gather-j7rpq" Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.461123 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hl6k\" (UniqueName: \"kubernetes.io/projected/1022863d-ebfc-49a0-85f1-a89a8f6fb03c-kube-api-access-8hl6k\") pod \"must-gather-j7rpq\" (UID: \"1022863d-ebfc-49a0-85f1-a89a8f6fb03c\") " pod="openshift-must-gather-nnzkd/must-gather-j7rpq" Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.461869 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1022863d-ebfc-49a0-85f1-a89a8f6fb03c-must-gather-output\") pod \"must-gather-j7rpq\" (UID: \"1022863d-ebfc-49a0-85f1-a89a8f6fb03c\") " pod="openshift-must-gather-nnzkd/must-gather-j7rpq" Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.483409 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hl6k\" (UniqueName: \"kubernetes.io/projected/1022863d-ebfc-49a0-85f1-a89a8f6fb03c-kube-api-access-8hl6k\") pod \"must-gather-j7rpq\" (UID: \"1022863d-ebfc-49a0-85f1-a89a8f6fb03c\") " pod="openshift-must-gather-nnzkd/must-gather-j7rpq" Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.629352 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nnzkd/must-gather-j7rpq" Jan 26 23:08:18 crc kubenswrapper[4793]: I0126 23:08:18.080171 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nnzkd/must-gather-j7rpq"] Jan 26 23:08:18 crc kubenswrapper[4793]: W0126 23:08:18.097487 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1022863d_ebfc_49a0_85f1_a89a8f6fb03c.slice/crio-d3d23826dc94d8c352596a1325613d1a57619df764981da030021f190f40a398 WatchSource:0}: Error finding container d3d23826dc94d8c352596a1325613d1a57619df764981da030021f190f40a398: Status 404 returned error can't find the container with id d3d23826dc94d8c352596a1325613d1a57619df764981da030021f190f40a398 Jan 26 23:08:18 crc kubenswrapper[4793]: I0126 23:08:18.099994 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 23:08:18 crc kubenswrapper[4793]: I0126 23:08:18.165251 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nnzkd/must-gather-j7rpq" event={"ID":"1022863d-ebfc-49a0-85f1-a89a8f6fb03c","Type":"ContainerStarted","Data":"d3d23826dc94d8c352596a1325613d1a57619df764981da030021f190f40a398"} Jan 26 23:08:21 crc kubenswrapper[4793]: I0126 23:08:21.761441 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:08:21 crc kubenswrapper[4793]: E0126 23:08:21.762081 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:08:32 crc kubenswrapper[4793]: E0126 
23:08:32.483155 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-must-gather:latest"
Jan 26 23:08:32 crc kubenswrapper[4793]: E0126 23:08:32.484013 4793 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 26 23:08:32 crc kubenswrapper[4793]: container &Container{Name:gather,Image:quay.io/openstack-k8s-operators/openstack-must-gather:latest,Command:[/bin/bash -c if command -v setsid >/dev/null 2>&1 && command -v ps >/dev/null 2>&1 && command -v pkill >/dev/null 2>&1; then
Jan 26 23:08:32 crc kubenswrapper[4793]: HAVE_SESSION_TOOLS=true
Jan 26 23:08:32 crc kubenswrapper[4793]: else
Jan 26 23:08:32 crc kubenswrapper[4793]: HAVE_SESSION_TOOLS=false
Jan 26 23:08:32 crc kubenswrapper[4793]: fi
Jan 26 23:08:32 crc kubenswrapper[4793]:
Jan 26 23:08:32 crc kubenswrapper[4793]:
Jan 26 23:08:32 crc kubenswrapper[4793]: echo "[disk usage checker] Started"
Jan 26 23:08:32 crc kubenswrapper[4793]: target_dir="/must-gather"
Jan 26 23:08:32 crc kubenswrapper[4793]: usage_percentage_limit="80"
Jan 26 23:08:32 crc kubenswrapper[4793]: while true; do
Jan 26 23:08:32 crc kubenswrapper[4793]: usage_percentage=$(df -P "$target_dir" | awk 'NR==2 {print $5}' | sed 's/%//')
Jan 26 23:08:32 crc kubenswrapper[4793]: echo "[disk usage checker] Volume usage percentage: current = ${usage_percentage} ; allowed = ${usage_percentage_limit}"
Jan 26 23:08:32 crc kubenswrapper[4793]: if [ "$usage_percentage" -gt "$usage_percentage_limit" ]; then
Jan 26 23:08:32 crc kubenswrapper[4793]: echo "[disk usage checker] Disk usage exceeds the volume percentage of ${usage_percentage_limit} for mounted directory, terminating..."
Jan 26 23:08:32 crc kubenswrapper[4793]: if [ "$HAVE_SESSION_TOOLS" = "true" ]; then
Jan 26 23:08:32 crc kubenswrapper[4793]: ps -o sess --no-headers | sort -u | while read sid; do
Jan 26 23:08:32 crc kubenswrapper[4793]: [[ "$sid" -eq "${$}" ]] && continue
Jan 26 23:08:32 crc kubenswrapper[4793]: pkill --signal SIGKILL --session "$sid"
Jan 26 23:08:32 crc kubenswrapper[4793]: done
Jan 26 23:08:32 crc kubenswrapper[4793]: else
Jan 26 23:08:32 crc kubenswrapper[4793]: kill 0
Jan 26 23:08:32 crc kubenswrapper[4793]: fi
Jan 26 23:08:32 crc kubenswrapper[4793]: exit 1
Jan 26 23:08:32 crc kubenswrapper[4793]: fi
Jan 26 23:08:32 crc kubenswrapper[4793]: sleep 5
Jan 26 23:08:32 crc kubenswrapper[4793]: done & if [ "$HAVE_SESSION_TOOLS" = "true" ]; then
Jan 26 23:08:32 crc kubenswrapper[4793]: setsid -w bash <<-MUSTGATHER_EOF
Jan 26 23:08:32 crc kubenswrapper[4793]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather
Jan 26 23:08:32 crc kubenswrapper[4793]: MUSTGATHER_EOF
Jan 26 23:08:32 crc kubenswrapper[4793]: else
Jan 26 23:08:32 crc kubenswrapper[4793]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather
Jan 26 23:08:32 crc kubenswrapper[4793]: fi; sync && echo 'Caches written to disk'],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:must-gather-output,ReadOnly:false,MountPath:/must-gather,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hl6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod must-gather-j7rpq_openshift-must-gather-nnzkd(1022863d-ebfc-49a0-85f1-a89a8f6fb03c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled
Jan 26 23:08:32 crc kubenswrapper[4793]: > logger="UnhandledError"
Jan 26 23:08:32 crc kubenswrapper[4793]: E0126 23:08:32.486369 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-nnzkd/must-gather-j7rpq"
podUID="1022863d-ebfc-49a0-85f1-a89a8f6fb03c"
Jan 26 23:08:32 crc kubenswrapper[4793]: I0126 23:08:32.760870 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497"
Jan 26 23:08:32 crc kubenswrapper[4793]: E0126 23:08:32.761107 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391"
Jan 26 23:08:33 crc kubenswrapper[4793]: E0126 23:08:33.275967 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-nnzkd/must-gather-j7rpq" podUID="1022863d-ebfc-49a0-85f1-a89a8f6fb03c"
Jan 26 23:08:44 crc kubenswrapper[4793]: I0126 23:08:44.294813 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nnzkd/must-gather-j7rpq"]
Jan 26 23:08:44 crc kubenswrapper[4793]: I0126 23:08:44.301976 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nnzkd/must-gather-j7rpq"]
Jan 26 23:08:44 crc kubenswrapper[4793]: I0126 23:08:44.625336 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nnzkd/must-gather-j7rpq"
Jan 26 23:08:44 crc kubenswrapper[4793]: I0126 23:08:44.761066 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497"
Jan 26 23:08:44 crc kubenswrapper[4793]: E0126 23:08:44.761409 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391"
Jan 26 23:08:44 crc kubenswrapper[4793]: I0126 23:08:44.805854 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1022863d-ebfc-49a0-85f1-a89a8f6fb03c-must-gather-output\") pod \"1022863d-ebfc-49a0-85f1-a89a8f6fb03c\" (UID: \"1022863d-ebfc-49a0-85f1-a89a8f6fb03c\") "
Jan 26 23:08:44 crc kubenswrapper[4793]: I0126 23:08:44.806163 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1022863d-ebfc-49a0-85f1-a89a8f6fb03c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "1022863d-ebfc-49a0-85f1-a89a8f6fb03c" (UID: "1022863d-ebfc-49a0-85f1-a89a8f6fb03c"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:08:44 crc kubenswrapper[4793]: I0126 23:08:44.806360 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hl6k\" (UniqueName: \"kubernetes.io/projected/1022863d-ebfc-49a0-85f1-a89a8f6fb03c-kube-api-access-8hl6k\") pod \"1022863d-ebfc-49a0-85f1-a89a8f6fb03c\" (UID: \"1022863d-ebfc-49a0-85f1-a89a8f6fb03c\") "
Jan 26 23:08:44 crc kubenswrapper[4793]: I0126 23:08:44.806664 4793 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1022863d-ebfc-49a0-85f1-a89a8f6fb03c-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 26 23:08:44 crc kubenswrapper[4793]: I0126 23:08:44.815477 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1022863d-ebfc-49a0-85f1-a89a8f6fb03c-kube-api-access-8hl6k" (OuterVolumeSpecName: "kube-api-access-8hl6k") pod "1022863d-ebfc-49a0-85f1-a89a8f6fb03c" (UID: "1022863d-ebfc-49a0-85f1-a89a8f6fb03c"). InnerVolumeSpecName "kube-api-access-8hl6k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:44 crc kubenswrapper[4793]: I0126 23:08:44.908952 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hl6k\" (UniqueName: \"kubernetes.io/projected/1022863d-ebfc-49a0-85f1-a89a8f6fb03c-kube-api-access-8hl6k\") on node \"crc\" DevicePath \"\""
Jan 26 23:08:45 crc kubenswrapper[4793]: I0126 23:08:45.386906 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nnzkd/must-gather-j7rpq"
Jan 26 23:08:45 crc kubenswrapper[4793]: I0126 23:08:45.772018 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1022863d-ebfc-49a0-85f1-a89a8f6fb03c" path="/var/lib/kubelet/pods/1022863d-ebfc-49a0-85f1-a89a8f6fb03c/volumes"
Jan 26 23:08:56 crc kubenswrapper[4793]: I0126 23:08:56.761241 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497"
Jan 26 23:08:56 crc kubenswrapper[4793]: E0126 23:08:56.762051 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391"
Jan 26 23:09:08 crc kubenswrapper[4793]: I0126 23:09:08.760308 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497"
Jan 26 23:09:08 crc kubenswrapper[4793]: E0126 23:09:08.761099 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391"
Jan 26 23:09:10 crc kubenswrapper[4793]: I0126 23:09:10.065754 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-db-create-566zk"]
Jan 26 23:09:10 crc kubenswrapper[4793]: I0126 23:09:10.074067 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-517e-account-create-update-z5vlt"]
Jan 26 23:09:10 crc kubenswrapper[4793]: I0126 23:09:10.080998 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-db-create-566zk"]
Jan 26 23:09:10 crc kubenswrapper[4793]: I0126 23:09:10.087316 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-db-create-bz7cb"]
Jan 26 23:09:10 crc kubenswrapper[4793]: I0126 23:09:10.094681 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-517e-account-create-update-z5vlt"]
Jan 26 23:09:10 crc kubenswrapper[4793]: I0126 23:09:10.100629 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-db-create-bz7cb"]
Jan 26 23:09:11 crc kubenswrapper[4793]: I0126 23:09:11.028781 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-ca47-account-create-update-8c675"]
Jan 26 23:09:11 crc kubenswrapper[4793]: I0126 23:09:11.038326 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-ca47-account-create-update-8c675"]
Jan 26 23:09:11 crc kubenswrapper[4793]: I0126 23:09:11.771670 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05214d2f-078c-43c5-bf4d-c8d80580ce8f" path="/var/lib/kubelet/pods/05214d2f-078c-43c5-bf4d-c8d80580ce8f/volumes"
Jan 26 23:09:11 crc kubenswrapper[4793]: I0126 23:09:11.772454 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="466a301d-4d91-4f49-831a-7d7f07ecd1bd" path="/var/lib/kubelet/pods/466a301d-4d91-4f49-831a-7d7f07ecd1bd/volumes"
Jan 26 23:09:11 crc kubenswrapper[4793]: I0126 23:09:11.773057 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74d39f02-fd88-4b8f-8f71-0f4f1386ad9a" path="/var/lib/kubelet/pods/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a/volumes"
Jan 26 23:09:11 crc kubenswrapper[4793]: I0126 23:09:11.773623 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a62f2482-4c97-42ed-92d7-1b0dc5319971" path="/var/lib/kubelet/pods/a62f2482-4c97-42ed-92d7-1b0dc5319971/volumes"
Jan 26 23:09:20 crc kubenswrapper[4793]: I0126 23:09:20.031648 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/root-account-create-update-hrh5x"]
Jan 26 23:09:20 crc kubenswrapper[4793]: I0126 23:09:20.038588 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/root-account-create-update-hrh5x"]
Jan 26 23:09:21 crc kubenswrapper[4793]: I0126 23:09:21.761106 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497"
Jan 26 23:09:21 crc kubenswrapper[4793]: E0126 23:09:21.761909 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391"
Jan 26 23:09:21 crc kubenswrapper[4793]: I0126 23:09:21.770133 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a" path="/var/lib/kubelet/pods/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a/volumes"
Jan 26 23:09:35 crc kubenswrapper[4793]: I0126 23:09:35.766977 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497"
Jan 26 23:09:35 crc kubenswrapper[4793]: E0126 23:09:35.767876 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\""
pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391"